Test Report: Hyper-V_Windows 19598

cb70ad94d69a229bf8d3511a5a00af396fa2386e:2024-09-10:36157

Failed tests (40/201)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 118.82
56 TestErrorSpam/setup 174.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 29.84
81 TestFunctional/serial/ExtraConfig 269.63
82 TestFunctional/serial/ComponentHealth 120.72
85 TestFunctional/serial/InvalidService 4.22
91 TestFunctional/parallel/StatusCmd 127.02
95 TestFunctional/parallel/ServiceCmdConnect 181.45
97 TestFunctional/parallel/PersistentVolumeClaim 409.64
101 TestFunctional/parallel/MySQL 240.59
107 TestFunctional/parallel/NodeLabels 185.73
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 7.29
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 4.21
127 TestFunctional/parallel/ImageCommands/ImageListShort 48.9
128 TestFunctional/parallel/ImageCommands/ImageListTable 60.24
129 TestFunctional/parallel/ImageCommands/ImageListJson 60.18
130 TestFunctional/parallel/ImageCommands/ImageListYaml 47.07
131 TestFunctional/parallel/ImageCommands/ImageBuild 120.42
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 116.45
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 120.63
135 TestFunctional/parallel/DockerEnv/powershell 421.84
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 120.57
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 60.47
141 TestFunctional/parallel/ServiceCmd/DeployApp 2.16
142 TestFunctional/parallel/ServiceCmd/List 6.52
143 TestFunctional/parallel/ServiceCmd/JSONOutput 6.47
144 TestFunctional/parallel/ServiceCmd/HTTPS 6.34
145 TestFunctional/parallel/ServiceCmd/Format 6.28
146 TestFunctional/parallel/ServiceCmd/URL 6.25
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.37
158 TestMultiControlPlane/serial/PingHostFromPods 63.93
165 TestMultiControlPlane/serial/RestartSecondaryNode 256.7
176 TestJSONOutput/start/Command 193.08
182 TestJSONOutput/pause/Command 7.53
188 TestJSONOutput/unpause/Command 52.62
221 TestMultiNode/serial/PingHostFrom2Pods 52.35
228 TestMultiNode/serial/RestartKeepsNodes 557.19
229 TestMultiNode/serial/DeleteNode 47.29
241 TestKubernetesUpgrade 10800.395
254 TestNoKubernetes/serial/StartWithK8s 307.69
TestAddons/parallel/Registry (118.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.9268ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jj9rm" [347bd0df-17e0-49d7-9fea-54ec3391c705] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0082755s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b65qr" [8975ac76-f502-459b-8189-457e92575e4b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0174364s
addons_test.go:342: (dbg) Run:  kubectl --context addons-218100 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-218100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-218100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.184327s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-218100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 ip: (2.3225081s)
2024/09/10 17:50:05 [DEBUG] GET http://172.31.219.103:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable registry --alsologtostderr -v=1: (13.6381111s)
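The failing check above expects an "HTTP/1.1 200" response from the registry service (the in-cluster `wget --spider` timed out, after which the test falls back to a direct GET against the node IP on port 5000). Below is a minimal local sketch of that kind of probe, with a stand-in `python3 -m http.server` in place of the real registry and an arbitrary port; it is not part of the test suite, only an illustration of the success condition.

```shell
# Stand-in registry: a local HTTP server on an arbitrary port (real test
# probes "minikube ip":5000 instead).
PORT=51234
python3 -m http.server "$PORT" --bind 127.0.0.1 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Fetch only the status code, analogous to `wget --spider -S` which
# prints the response headers without downloading the body.
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "http://127.0.0.1:$PORT/")

kill "$SERVER_PID" 2>/dev/null

# The test passes only on a 200 status; anything else is a failure.
if [ "$STATUS" = "200" ]; then
    echo "registry probe OK (HTTP $STATUS)"
else
    echo "registry probe FAILED (HTTP $STATUS)" >&2
    exit 1
fi
```

In the run above the in-cluster DNS name never answered within the 1m0s budget, so the probe never reached this 200 check at all.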
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-218100 -n addons-218100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-218100 -n addons-218100: (11.5134283s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 logs -n 25: (8.8851844s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p download-only-884300                                                                     | download-only-884300 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:31 UTC | 10 Sep 24 17:31 UTC |
	| start   | -o=json --download-only                                                                     | download-only-893900 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:31 UTC |                     |
	|         | -p download-only-893900                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:32 UTC | 10 Sep 24 17:32 UTC |
	| delete  | -p download-only-893900                                                                     | download-only-893900 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:32 UTC | 10 Sep 24 17:32 UTC |
	| delete  | -p download-only-884300                                                                     | download-only-884300 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:32 UTC | 10 Sep 24 17:32 UTC |
	| delete  | -p download-only-893900                                                                     | download-only-893900 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:32 UTC | 10 Sep 24 17:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-282900 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:32 UTC |                     |
	|         | binary-mirror-282900                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:62797                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-282900                                                                     | binary-mirror-282900 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:32 UTC | 10 Sep 24 17:32 UTC |
	| addons  | disable dashboard -p                                                                        | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:32 UTC |                     |
	|         | addons-218100                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:32 UTC |                     |
	|         | addons-218100                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-218100 --wait=true                                                                | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:32 UTC | 10 Sep 24 17:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-218100 addons disable                                                                | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:39 UTC | 10 Sep 24 17:40 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:48 UTC | 10 Sep 24 17:49 UTC |
	|         | addons-218100                                                                               |                      |                   |         |                     |                     |
	| ssh     | addons-218100 ssh cat                                                                       | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:49 UTC | 10 Sep 24 17:49 UTC |
	|         | /opt/local-path-provisioner/pvc-4de6a817-3f84-46d6-85bd-410059542afd_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | addons-218100 addons disable                                                                | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:49 UTC | 10 Sep 24 17:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-218100 addons                                                                        | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:49 UTC | 10 Sep 24 17:49 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-218100 addons                                                                        | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:49 UTC | 10 Sep 24 17:49 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-218100 addons disable                                                                | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:49 UTC | 10 Sep 24 17:49 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-218100 addons                                                                        | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:49 UTC | 10 Sep 24 17:50 UTC |
	|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:49 UTC | 10 Sep 24 17:50 UTC |
	|         | -p addons-218100                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ip      | addons-218100 ip                                                                            | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:50 UTC | 10 Sep 24 17:50 UTC |
	| addons  | addons-218100 addons disable                                                                | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:50 UTC | 10 Sep 24 17:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-218100 addons disable                                                                | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:50 UTC |                     |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:50 UTC |                     |
	|         | -p addons-218100                                                                            |                      |                   |         |                     |                     |
	| addons  | addons-218100 addons disable                                                                | addons-218100        | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:50 UTC |                     |
	|         | headlamp --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:32:19
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:32:19.331561   11412 out.go:345] Setting OutFile to fd 816 ...
	I0910 17:32:19.379560   11412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:32:19.379560   11412 out.go:358] Setting ErrFile to fd 824...
	I0910 17:32:19.379560   11412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:32:19.397551   11412 out.go:352] Setting JSON to false
	I0910 17:32:19.400552   11412 start.go:129] hostinfo: {"hostname":"minikube5","uptime":100802,"bootTime":1725888736,"procs":178,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 17:32:19.400552   11412 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 17:32:19.404552   11412 out.go:177] * [addons-218100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 17:32:19.408552   11412 notify.go:220] Checking for updates...
	I0910 17:32:19.408552   11412 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 17:32:19.410555   11412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:32:19.413558   11412 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 17:32:19.415554   11412 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:32:19.417558   11412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:32:19.421553   11412 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:32:24.545739   11412 out.go:177] * Using the hyperv driver based on user configuration
	I0910 17:32:24.549576   11412 start.go:297] selected driver: hyperv
	I0910 17:32:24.549693   11412 start.go:901] validating driver "hyperv" against <nil>
	I0910 17:32:24.549693   11412 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:32:24.591905   11412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:32:24.593987   11412 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:32:24.594079   11412 cni.go:84] Creating CNI manager for ""
	I0910 17:32:24.594079   11412 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:32:24.594179   11412 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:32:24.594447   11412 start.go:340] cluster config:
	{Name:addons-218100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-218100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:32:24.594826   11412 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:32:24.599093   11412 out.go:177] * Starting "addons-218100" primary control-plane node in "addons-218100" cluster
	I0910 17:32:24.601674   11412 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:32:24.601847   11412 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 17:32:24.601847   11412 cache.go:56] Caching tarball of preloaded images
	I0910 17:32:24.602181   11412 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 17:32:24.602364   11412 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 17:32:24.602973   11412 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\config.json ...
	I0910 17:32:24.603213   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\config.json: {Name:mkc3fe145a8daa36f202cf1d413f3d15f157c085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:32:24.603810   11412 start.go:360] acquireMachinesLock for addons-218100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:32:24.603810   11412 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-218100"
	I0910 17:32:24.604418   11412 start.go:93] Provisioning new machine with config: &{Name:addons-218100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:addons-218100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 17:32:24.604418   11412 start.go:125] createHost starting for "" (driver="hyperv")
	I0910 17:32:24.610131   11412 out.go:235] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0910 17:32:24.610131   11412 start.go:159] libmachine.API.Create for "addons-218100" (driver="hyperv")
	I0910 17:32:24.610131   11412 client.go:168] LocalClient.Create starting
	I0910 17:32:24.610131   11412 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0910 17:32:24.854443   11412 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0910 17:32:25.035148   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0910 17:32:26.855939   11412 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0910 17:32:26.855939   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:26.856138   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0910 17:32:28.377359   11412 main.go:141] libmachine: [stdout =====>] : False
	
	I0910 17:32:28.378355   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:28.378355   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 17:32:29.671581   11412 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 17:32:29.671581   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:29.671956   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 17:32:32.781268   11412 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 17:32:32.781353   11412 main.go:141] libmachine: [stderr =====>] : 
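The two `Get-VMSwitch` queries above filter for External switches or the built-in Default Switch (matched by its fixed GUID) and sort by `SwitchType`; the driver then settles on "Default Switch" since no External switch exists on this host. A minimal sketch of that selection, parsing the JSON exactly as it appears in the log (the External-first preference is an assumption about the driver's intent, not taken from the log):

```python
import json

# Output of the Get-VMSwitch query above, verbatim from the log.
raw = """
[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]
"""

# Well-known GUID of Hyper-V's built-in Default Switch (from the query filter).
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

switches = json.loads(raw)

# Assumed preference: take an External switch (SwitchType 2) when one
# exists, otherwise fall back to the Default Switch by GUID.
chosen = next((s for s in switches if s["SwitchType"] == 2), None) \
    or next(s for s in switches if s["Id"] == DEFAULT_SWITCH_ID)
```

With only the Default Switch present, `chosen["Name"]` is `"Default Switch"`, matching the "Using switch" line later in the log.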
	I0910 17:32:32.783688   11412 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:32:33.131613   11412 main.go:141] libmachine: Creating SSH key...
	I0910 17:32:33.221893   11412 main.go:141] libmachine: Creating VM...
	I0910 17:32:33.221893   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 17:32:35.670545   11412 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 17:32:35.670545   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:35.670545   11412 main.go:141] libmachine: Using switch "Default Switch"
	I0910 17:32:35.670545   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 17:32:37.162768   11412 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 17:32:37.162883   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:37.163088   11412 main.go:141] libmachine: Creating VHD
	I0910 17:32:37.163115   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0910 17:32:40.471238   11412 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 06C72582-1E60-4594-BF49-F8E57E4FBFE7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0910 17:32:40.471238   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:40.471340   11412 main.go:141] libmachine: Writing magic tar header
	I0910 17:32:40.471427   11412 main.go:141] libmachine: Writing SSH key tar header
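The "magic tar header" lines reflect the boot2docker driver pattern: the freshly generated SSH key is packed as a tar stream into the small fixed VHD created above, so the guest can detect and extract it on first boot. A sketch of building such a tar payload in memory (the key bytes and the `.ssh/key.pub` entry name here are illustrative, not taken from the log):

```python
import io
import tarfile

# Hypothetical key material; the real driver embeds the key pair it just generated.
pub_key = b"ssh-rsa AAAAB3... docker@minikube\n"

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name=".ssh/key.pub")
    info.size = len(pub_key)
    tar.addfile(info, io.BytesIO(pub_key))

# This byte string would be written at the start of the raw VHD data
# region, where the guest's automount service looks for a tar header.
image = buf.getvalue()
```

Reading the stream back with `tarfile.open` recovers the key file, which is essentially what the guest side does during boot.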
	I0910 17:32:40.481901   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0910 17:32:43.407771   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:32:43.407771   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:43.407771   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\disk.vhd' -SizeBytes 20000MB
	I0910 17:32:45.728930   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:32:45.728930   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:45.729746   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-218100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0910 17:32:48.907990   11412 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-218100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0910 17:32:48.907990   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:48.907990   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-218100 -DynamicMemoryEnabled $false
	I0910 17:32:50.826621   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:32:50.826621   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:50.826621   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-218100 -Count 2
	I0910 17:32:52.690736   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:32:52.690736   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:52.690736   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-218100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\boot2docker.iso'
	I0910 17:32:54.911448   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:32:54.911448   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:54.912137   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-218100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\disk.vhd'
	I0910 17:32:57.208663   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:32:57.208663   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:57.208663   11412 main.go:141] libmachine: Starting VM...
	I0910 17:32:57.208663   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-218100
	I0910 17:32:59.911248   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:32:59.912051   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:32:59.912051   11412 main.go:141] libmachine: Waiting for host to start...
	I0910 17:32:59.912129   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:01.930063   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:01.930063   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:01.931172   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:04.145884   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:33:04.145884   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:05.159839   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:07.064479   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:07.064865   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:07.065040   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:09.291284   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:33:09.291284   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:10.300123   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:12.203683   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:12.203683   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:12.203845   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:14.363460   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:33:14.363460   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:15.378755   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:17.318707   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:17.318707   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:17.318966   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:19.547755   11412 main.go:141] libmachine: [stdout =====>] : 
	I0910 17:33:19.547755   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:20.550451   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:22.493852   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:22.493852   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:22.493948   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:24.819088   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:33:24.819354   11412 main.go:141] libmachine: [stderr =====>] : 
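The repeated `Get-VM ... .state` / `ipaddresses[0]` pairs above are a poll loop: the driver re-queries roughly once per second until the first network adapter reports an address (here, after about 25 seconds, `172.31.219.103`). The loop can be sketched as (function name and timeout values are illustrative):

```python
import time

def wait_for_ip(get_ip, interval=1.0, timeout=120.0):
    """Poll a getter until it returns a non-empty IP, mirroring the
    driver's repeated ipaddresses[0] queries in the log above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = get_ip()
        if ip:
            return ip
        time.sleep(interval)
    raise TimeoutError("VM did not report an IP address in time")

# Simulated adapter: empty until the fourth query, as in the log.
answers = iter(["", "", "", "172.31.219.103"])
ip = wait_for_ip(lambda: next(answers), interval=0.0)
```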
	I0910 17:33:24.819456   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:26.753566   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:26.754333   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:26.754392   11412 machine.go:93] provisionDockerMachine start ...
	I0910 17:33:26.754573   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:28.651677   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:28.651677   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:28.652366   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:30.871647   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:33:30.872014   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:30.875574   11412 main.go:141] libmachine: Using SSH client type: native
	I0910 17:33:30.889510   11412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.219.103 22 <nil> <nil>}
	I0910 17:33:30.889510   11412 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 17:33:31.031275   11412 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 17:33:31.031464   11412 buildroot.go:166] provisioning hostname "addons-218100"
	I0910 17:33:31.031672   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:32.915318   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:32.915318   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:32.915732   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:35.131504   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:33:35.132227   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:35.135739   11412 main.go:141] libmachine: Using SSH client type: native
	I0910 17:33:35.136340   11412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.219.103 22 <nil> <nil>}
	I0910 17:33:35.136340   11412 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-218100 && echo "addons-218100" | sudo tee /etc/hostname
	I0910 17:33:35.298888   11412 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-218100
	
	I0910 17:33:35.299473   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:37.129253   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:37.129253   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:37.130060   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:39.427930   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:33:39.427930   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:39.431841   11412 main.go:141] libmachine: Using SSH client type: native
	I0910 17:33:39.432492   11412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.219.103 22 <nil> <nil>}
	I0910 17:33:39.432492   11412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-218100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-218100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-218100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:33:39.593690   11412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:33:39.593690   11412 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 17:33:39.593690   11412 buildroot.go:174] setting up certificates
	I0910 17:33:39.593690   11412 provision.go:84] configureAuth start
	I0910 17:33:39.593690   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:41.429917   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:41.429917   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:41.430912   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:43.626019   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:33:43.626092   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:43.626186   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:45.483568   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:45.483568   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:45.483568   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:47.737920   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:33:47.738026   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:47.738107   11412 provision.go:143] copyHostCerts
	I0910 17:33:47.738175   11412 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 17:33:47.739754   11412 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 17:33:47.740611   11412 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 17:33:47.741869   11412 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-218100 san=[127.0.0.1 172.31.219.103 addons-218100 localhost minikube]
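The server certificate is generated with the SAN list `[127.0.0.1 172.31.219.103 addons-218100 localhost minikube]`. For x509 generation such a mixed list has to be split into IP and DNS entries; a small sketch of that split, using the values from the log (the split itself is a standard requirement of SAN encoding, not something the log shows):

```python
import ipaddress

san = ["127.0.0.1", "172.31.219.103", "addons-218100", "localhost", "minikube"]

ip_sans, dns_sans = [], []
for entry in san:
    try:
        # Valid addresses become IP SANs; everything else is a DNS SAN.
        ip_sans.append(str(ipaddress.ip_address(entry)))
    except ValueError:
        dns_sans.append(entry)
```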
	I0910 17:33:47.844124   11412 provision.go:177] copyRemoteCerts
	I0910 17:33:47.853151   11412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:33:47.853151   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:49.719055   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:49.719055   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:49.719153   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:51.960590   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:33:51.960590   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:51.961075   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:33:52.071633   11412 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2181509s)
	I0910 17:33:52.072438   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 17:33:52.113014   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:33:52.154781   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 17:33:52.198035   11412 provision.go:87] duration metric: took 12.6034941s to configureAuth
	I0910 17:33:52.198137   11412 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:33:52.198730   11412 config.go:182] Loaded profile config "addons-218100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:33:52.198883   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:54.112888   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:54.112888   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:54.113439   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:33:56.366175   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:33:56.366175   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:56.371129   11412 main.go:141] libmachine: Using SSH client type: native
	I0910 17:33:56.371552   11412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.219.103 22 <nil> <nil>}
	I0910 17:33:56.371630   11412 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 17:33:56.527309   11412 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 17:33:56.527309   11412 buildroot.go:70] root file system type: tmpfs
	I0910 17:33:56.527309   11412 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 17:33:56.527309   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:33:58.398928   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:33:58.398928   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:33:58.398928   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:34:00.636013   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:34:00.636013   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:00.640076   11412 main.go:141] libmachine: Using SSH client type: native
	I0910 17:34:00.640351   11412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.219.103 22 <nil> <nil>}
	I0910 17:34:00.640351   11412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 17:34:00.801867   11412 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 17:34:00.801990   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:34:02.716217   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:34:02.716217   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:02.716743   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:34:05.010417   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:34:05.010417   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:05.014061   11412 main.go:141] libmachine: Using SSH client type: native
	I0910 17:34:05.014684   11412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.219.103 22 <nil> <nil>}
	I0910 17:34:05.014684   11412 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 17:34:07.207017   11412 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 17:34:07.207017   11412 machine.go:96] duration metric: took 40.4498257s to provisionDockerMachine
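The `sudo diff -u ... || { mv ...; systemctl ...; }` command above is an idempotent-update idiom: the `.new` file replaces `docker.service` (and triggers daemon-reload/enable/restart) only when the contents differ, so re-provisioning an unchanged machine does not bounce Docker. A sketch of the same pattern against ordinary files (no sudo/systemd; names are illustrative):

```python
import pathlib
import tempfile

def install_if_changed(new_content: str, target: pathlib.Path) -> bool:
    """Write `target` only when it differs from `new_content`, mirroring
    the diff-or-replace command in the log; returns True on change."""
    if target.exists() and target.read_text() == new_content:
        return False  # contents identical: skip the restart path
    target.write_text(new_content)
    return True  # changed: this is where daemon-reload/restart would run

svc = pathlib.Path(tempfile.mkdtemp()) / "docker.service"
unit = "[Unit]\nDescription=Docker Application Container Engine\n"
first = install_if_changed(unit, svc)
second = install_if_changed(unit, svc)
```

In the log the first run takes the replace branch (hence the "Created symlink" output from `systemctl enable`), because no `docker.service` existed yet.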
	I0910 17:34:07.207017   11412 client.go:171] duration metric: took 1m42.5899495s to LocalClient.Create
	I0910 17:34:07.207553   11412 start.go:167] duration metric: took 1m42.5904852s to libmachine.API.Create "addons-218100"
	I0910 17:34:07.207553   11412 start.go:293] postStartSetup for "addons-218100" (driver="hyperv")
	I0910 17:34:07.207630   11412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:34:07.216360   11412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:34:07.216360   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:34:09.092210   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:34:09.093038   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:09.093119   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:34:11.430139   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:34:11.430220   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:11.430555   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:34:11.537906   11412 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3211936s)
	I0910 17:34:11.548856   11412 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:34:11.555534   11412 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:34:11.555534   11412 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 17:34:11.556132   11412 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 17:34:11.556469   11412 start.go:296] duration metric: took 4.3485456s for postStartSetup
	I0910 17:34:11.559620   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:34:13.455165   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:34:13.455165   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:13.455638   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:34:15.710803   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:34:15.710803   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:15.711432   11412 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\config.json ...
	I0910 17:34:15.714246   11412 start.go:128] duration metric: took 1m51.1022572s to createHost
	I0910 17:34:15.714366   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:34:17.580388   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:34:17.580388   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:17.580533   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:34:19.846519   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:34:19.847291   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:19.851045   11412 main.go:141] libmachine: Using SSH client type: native
	I0910 17:34:19.851440   11412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.219.103 22 <nil> <nil>}
	I0910 17:34:19.851518   11412 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:34:20.001011   11412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725989660.223051981
	
	I0910 17:34:20.001011   11412 fix.go:216] guest clock: 1725989660.223051981
	I0910 17:34:20.001162   11412 fix.go:229] Guest: 2024-09-10 17:34:20.223051981 +0000 UTC Remote: 2024-09-10 17:34:15.7143303 +0000 UTC m=+116.446289801 (delta=4.508721681s)
	I0910 17:34:20.001258   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:34:21.901061   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:34:21.901061   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:21.902003   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:34:24.193352   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:34:24.193352   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:24.197239   11412 main.go:141] libmachine: Using SSH client type: native
	I0910 17:34:24.197789   11412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.219.103 22 <nil> <nil>}
	I0910 17:34:24.197789   11412 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725989660
	I0910 17:34:24.349088   11412 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 17:34:20 UTC 2024
	
	I0910 17:34:24.349088   11412 fix.go:236] clock set: Tue Sep 10 17:34:20 UTC 2024
	 (err=<nil>)
	I0910 17:34:24.349088   11412 start.go:83] releasing machines lock for "addons-218100", held for 1m59.7371849s
	I0910 17:34:24.349828   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:34:26.240359   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:34:26.240359   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:26.241089   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:34:28.513548   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:34:28.513548   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:28.517055   11412 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 17:34:28.517676   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:34:28.527475   11412 ssh_runner.go:195] Run: cat /version.json
	I0910 17:34:28.527475   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:34:30.462707   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:34:30.462707   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:30.462707   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:34:30.463322   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:34:30.463322   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:30.463322   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:34:32.799150   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:34:32.800212   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:32.800538   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:34:32.824876   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:34:32.824876   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:34:32.825544   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:34:32.908510   11412 ssh_runner.go:235] Completed: cat /version.json: (4.3807404s)
	I0910 17:34:32.917777   11412 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3999049s)
	W0910 17:34:32.917777   11412 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 17:34:32.921775   11412 ssh_runner.go:195] Run: systemctl --version
	I0910 17:34:32.938765   11412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:34:32.946883   11412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:34:32.955137   11412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:34:32.980350   11412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:34:32.980457   11412 start.go:495] detecting cgroup driver to use...
	I0910 17:34:32.980965   11412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:34:33.020267   11412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 17:34:33.045922   11412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 17:34:33.064394   11412 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 17:34:33.071746   11412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 17:34:33.098176   11412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 17:34:33.122389   11412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W0910 17:34:33.142712   11412 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 17:34:33.142712   11412 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 17:34:33.146919   11412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 17:34:33.174528   11412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:34:33.198742   11412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 17:34:33.222961   11412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 17:34:33.248629   11412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 17:34:33.277969   11412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:34:33.312305   11412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:34:33.340310   11412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:34:33.505834   11412 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 17:34:33.529765   11412 start.go:495] detecting cgroup driver to use...
	I0910 17:34:33.540305   11412 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 17:34:33.571526   11412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:34:33.602904   11412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:34:33.635008   11412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:34:33.665657   11412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 17:34:33.695647   11412 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 17:34:33.760111   11412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 17:34:33.785213   11412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:34:33.825765   11412 ssh_runner.go:195] Run: which cri-dockerd
	I0910 17:34:33.839296   11412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 17:34:33.856615   11412 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 17:34:33.896637   11412 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 17:34:34.078421   11412 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 17:34:34.234516   11412 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 17:34:34.234902   11412 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 17:34:34.273295   11412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:34:34.438516   11412 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 17:34:36.976104   11412 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5373513s)
	I0910 17:34:36.985397   11412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 17:34:37.021619   11412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 17:34:37.056363   11412 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 17:34:37.227539   11412 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 17:34:37.386705   11412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:34:37.559442   11412 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 17:34:37.599663   11412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 17:34:37.631785   11412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:34:37.822081   11412 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 17:34:37.929528   11412 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 17:34:37.938532   11412 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 17:34:37.949513   11412 start.go:563] Will wait 60s for crictl version
	I0910 17:34:37.959770   11412 ssh_runner.go:195] Run: which crictl
	I0910 17:34:37.973694   11412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:34:38.019260   11412 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 17:34:38.027305   11412 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 17:34:38.067501   11412 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 17:34:38.104220   11412 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 17:34:38.104395   11412 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 17:34:38.108127   11412 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 17:34:38.109118   11412 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 17:34:38.109118   11412 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 17:34:38.109118   11412 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 17:34:38.113112   11412 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 17:34:38.113112   11412 ip.go:214] interface addr: 172.31.208.1/20
	I0910 17:34:38.121113   11412 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 17:34:38.127408   11412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:34:38.147497   11412 kubeadm.go:883] updating cluster {Name:addons-218100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.0 ClusterName:addons-218100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 17:34:38.147638   11412 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:34:38.153811   11412 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 17:34:38.175369   11412 docker.go:685] Got preloaded images: 
	I0910 17:34:38.175369   11412 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0910 17:34:38.184006   11412 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 17:34:38.209033   11412 ssh_runner.go:195] Run: which lz4
	I0910 17:34:38.224664   11412 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 17:34:38.231193   11412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 17:34:38.231399   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0910 17:34:39.696367   11412 docker.go:649] duration metric: took 1.4810272s to copy over tarball
	I0910 17:34:39.705753   11412 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 17:34:44.563979   11412 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.8577746s)
	I0910 17:34:44.564088   11412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 17:34:44.622006   11412 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 17:34:44.642624   11412 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0910 17:34:44.690183   11412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:34:44.869599   11412 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 17:34:50.448501   11412 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.5783369s)
	I0910 17:34:50.457158   11412 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 17:34:50.481260   11412 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 17:34:50.481260   11412 cache_images.go:84] Images are preloaded, skipping loading
	I0910 17:34:50.481260   11412 kubeadm.go:934] updating node { 172.31.219.103 8443 v1.31.0 docker true true} ...
	I0910 17:34:50.481260   11412 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-218100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.219.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-218100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 17:34:50.488579   11412 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 17:34:50.551086   11412 cni.go:84] Creating CNI manager for ""
	I0910 17:34:50.551176   11412 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:34:50.551176   11412 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 17:34:50.551262   11412 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.219.103 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-218100 NodeName:addons-218100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.219.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.219.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 17:34:50.551394   11412 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.219.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-218100"
	  kubeletExtraArgs:
	    node-ip: 172.31.219.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.219.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 17:34:50.559963   11412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:34:50.576752   11412 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 17:34:50.586112   11412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 17:34:50.601896   11412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0910 17:34:50.632897   11412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:34:50.659930   11412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0910 17:34:50.699735   11412 ssh_runner.go:195] Run: grep 172.31.219.103	control-plane.minikube.internal$ /etc/hosts
	I0910 17:34:50.705580   11412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.219.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:34:50.740332   11412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:34:50.915291   11412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:34:50.940311   11412 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100 for IP: 172.31.219.103
	I0910 17:34:50.940311   11412 certs.go:194] generating shared ca certs ...
	I0910 17:34:50.940420   11412 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:50.940809   11412 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 17:34:51.318019   11412 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt ...
	I0910 17:34:51.318019   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt: {Name:mkecc83abf7dbcd2f2b0fd63bac36f2a7fe554cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:51.320021   11412 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key ...
	I0910 17:34:51.320021   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key: {Name:mk56e2872d5c5070a04729e59e76e7398d15f15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:51.322021   11412 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 17:34:51.864361   11412 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0910 17:34:51.864361   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkfcb9723e08b8d76b8a2e73084c13f930548396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:51.866390   11412 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key ...
	I0910 17:34:51.866390   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkd23bfd48ce10457a367dee40c81533c5cc7b5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:51.867874   11412 certs.go:256] generating profile certs ...
	I0910 17:34:51.868347   11412 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\client.key
	I0910 17:34:51.868347   11412 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\client.crt with IP's: []
	I0910 17:34:51.969640   11412 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\client.crt ...
	I0910 17:34:51.969640   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\client.crt: {Name:mk21b4188dcf10aa0e47d11aaddceed6e9318b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:51.970640   11412 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\client.key ...
	I0910 17:34:51.970640   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\client.key: {Name:mkd1dee3a8b033259a647350ea1921008e941c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:51.971707   11412 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.key.087e837c
	I0910 17:34:51.972791   11412 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.crt.087e837c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.219.103]
	I0910 17:34:52.139505   11412 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.crt.087e837c ...
	I0910 17:34:52.139505   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.crt.087e837c: {Name:mk88668edea81f4b4a0b6864f8a5059dda9d9617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:52.141433   11412 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.key.087e837c ...
	I0910 17:34:52.141433   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.key.087e837c: {Name:mk80f18d7784b736b57477527a5ff45fe908d2f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:52.141966   11412 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.crt.087e837c -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.crt
	I0910 17:34:52.153165   11412 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.key.087e837c -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.key
	I0910 17:34:52.156776   11412 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\proxy-client.key
	I0910 17:34:52.157010   11412 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\proxy-client.crt with IP's: []
	I0910 17:34:52.384954   11412 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\proxy-client.crt ...
	I0910 17:34:52.384954   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\proxy-client.crt: {Name:mk1dc543bd1c91eb7980ceddec0d0698d5f87381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:52.385263   11412 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\proxy-client.key ...
	I0910 17:34:52.385263   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\proxy-client.key: {Name:mk447b8e03459e738a88ba72dbf531b88babc8d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:34:52.398783   11412 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 17:34:52.399380   11412 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 17:34:52.399731   11412 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 17:34:52.400100   11412 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 17:34:52.402707   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:34:52.442779   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 17:34:52.482898   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:34:52.529002   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 17:34:52.577539   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0910 17:34:52.620960   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 17:34:52.667171   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:34:52.709965   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-218100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 17:34:52.752613   11412 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:34:52.794261   11412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 17:34:52.830623   11412 ssh_runner.go:195] Run: openssl version
	I0910 17:34:52.848856   11412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:34:52.876362   11412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:34:52.882996   11412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:34:52.892757   11412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:34:52.911092   11412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 17:34:52.939703   11412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:34:52.945795   11412 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:34:52.945997   11412 kubeadm.go:392] StartCluster: {Name:addons-218100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-218100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:34:52.953002   11412 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 17:34:52.986355   11412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 17:34:53.012891   11412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 17:34:53.040416   11412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 17:34:53.056448   11412 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 17:34:53.056448   11412 kubeadm.go:157] found existing configuration files:
	
	I0910 17:34:53.064449   11412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 17:34:53.080498   11412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 17:34:53.088404   11412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 17:34:53.113152   11412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 17:34:53.128179   11412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 17:34:53.136157   11412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 17:34:53.161163   11412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 17:34:53.181985   11412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 17:34:53.192162   11412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 17:34:53.217147   11412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 17:34:53.232804   11412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 17:34:53.244840   11412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 17:34:53.262904   11412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 17:34:53.329423   11412 kubeadm.go:310] W0910 17:34:53.554406    1753 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:34:53.330743   11412 kubeadm.go:310] W0910 17:34:53.555249    1753 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:34:53.449071   11412 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 17:35:06.426151   11412 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 17:35:06.426151   11412 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 17:35:06.426151   11412 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 17:35:06.426151   11412 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 17:35:06.427931   11412 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 17:35:06.428132   11412 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 17:35:06.430682   11412 out.go:235]   - Generating certificates and keys ...
	I0910 17:35:06.431608   11412 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 17:35:06.431900   11412 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 17:35:06.432171   11412 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 17:35:06.432399   11412 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 17:35:06.432495   11412 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 17:35:06.432576   11412 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 17:35:06.432576   11412 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 17:35:06.433371   11412 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-218100 localhost] and IPs [172.31.219.103 127.0.0.1 ::1]
	I0910 17:35:06.433616   11412 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 17:35:06.433971   11412 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-218100 localhost] and IPs [172.31.219.103 127.0.0.1 ::1]
	I0910 17:35:06.434281   11412 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 17:35:06.434645   11412 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 17:35:06.434731   11412 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 17:35:06.434731   11412 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 17:35:06.434731   11412 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 17:35:06.434731   11412 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 17:35:06.434731   11412 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 17:35:06.434731   11412 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 17:35:06.435257   11412 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 17:35:06.435421   11412 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 17:35:06.435421   11412 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 17:35:06.438255   11412 out.go:235]   - Booting up control plane ...
	I0910 17:35:06.438492   11412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 17:35:06.438763   11412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 17:35:06.438941   11412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 17:35:06.439171   11412 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 17:35:06.439401   11412 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 17:35:06.439519   11412 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 17:35:06.439815   11412 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 17:35:06.440091   11412 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 17:35:06.440253   11412 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.403801ms
	I0910 17:35:06.440418   11412 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 17:35:06.440579   11412 kubeadm.go:310] [api-check] The API server is healthy after 7.002752747s
	I0910 17:35:06.440797   11412 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 17:35:06.441013   11412 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 17:35:06.441216   11412 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 17:35:06.441216   11412 kubeadm.go:310] [mark-control-plane] Marking the node addons-218100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 17:35:06.441216   11412 kubeadm.go:310] [bootstrap-token] Using token: cz3hqi.ttx7ox7tyzgxqklf
	I0910 17:35:06.445015   11412 out.go:235]   - Configuring RBAC rules ...
	I0910 17:35:06.445260   11412 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 17:35:06.445491   11412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 17:35:06.445839   11412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 17:35:06.446181   11412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 17:35:06.446523   11412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 17:35:06.446688   11412 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 17:35:06.446955   11412 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 17:35:06.447060   11412 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 17:35:06.447107   11412 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 17:35:06.447163   11412 kubeadm.go:310] 
	I0910 17:35:06.447269   11412 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 17:35:06.447269   11412 kubeadm.go:310] 
	I0910 17:35:06.447425   11412 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 17:35:06.447425   11412 kubeadm.go:310] 
	I0910 17:35:06.447535   11412 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 17:35:06.447662   11412 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 17:35:06.447781   11412 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 17:35:06.447841   11412 kubeadm.go:310] 
	I0910 17:35:06.448010   11412 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 17:35:06.448010   11412 kubeadm.go:310] 
	I0910 17:35:06.448124   11412 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 17:35:06.448186   11412 kubeadm.go:310] 
	I0910 17:35:06.448346   11412 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 17:35:06.448519   11412 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 17:35:06.448691   11412 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 17:35:06.448691   11412 kubeadm.go:310] 
	I0910 17:35:06.448924   11412 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 17:35:06.449101   11412 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 17:35:06.449157   11412 kubeadm.go:310] 
	I0910 17:35:06.449332   11412 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cz3hqi.ttx7ox7tyzgxqklf \
	I0910 17:35:06.449666   11412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b \
	I0910 17:35:06.449718   11412 kubeadm.go:310] 	--control-plane 
	I0910 17:35:06.449773   11412 kubeadm.go:310] 
	I0910 17:35:06.449933   11412 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 17:35:06.449989   11412 kubeadm.go:310] 
	I0910 17:35:06.450148   11412 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cz3hqi.ttx7ox7tyzgxqklf \
	I0910 17:35:06.450258   11412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
	I0910 17:35:06.450364   11412 cni.go:84] Creating CNI manager for ""
	I0910 17:35:06.450364   11412 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:35:06.452538   11412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 17:35:06.469058   11412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 17:35:06.484912   11412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 17:35:06.516579   11412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 17:35:06.527039   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-218100 minikube.k8s.io/updated_at=2024_09_10T17_35_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=addons-218100 minikube.k8s.io/primary=true
	I0910 17:35:06.530585   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:06.541715   11412 ops.go:34] apiserver oom_adj: -16
	I0910 17:35:06.687193   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:07.198826   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:07.698429   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:08.185678   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:08.690808   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:09.190951   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:09.696925   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:10.199410   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:10.690172   11412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:35:10.805977   11412 kubeadm.go:1113] duration metric: took 4.2891096s to wait for elevateKubeSystemPrivileges
	I0910 17:35:10.805977   11412 kubeadm.go:394] duration metric: took 17.8587794s to StartCluster
	I0910 17:35:10.805977   11412 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:35:10.806709   11412 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 17:35:10.807341   11412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:35:10.808669   11412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 17:35:10.808669   11412 start.go:235] Will wait 6m0s for node &{Name: IP:172.31.219.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 17:35:10.809212   11412 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0910 17:35:10.809403   11412 addons.go:69] Setting yakd=true in profile "addons-218100"
	I0910 17:35:10.809403   11412 addons.go:234] Setting addon yakd=true in "addons-218100"
	I0910 17:35:10.809467   11412 addons.go:69] Setting gcp-auth=true in profile "addons-218100"
	I0910 17:35:10.809467   11412 mustload.go:65] Loading cluster: addons-218100
	I0910 17:35:10.809635   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.809682   11412 config.go:182] Loaded profile config "addons-218100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:35:10.809682   11412 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-218100"
	I0910 17:35:10.809682   11412 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-218100"
	I0910 17:35:10.809682   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.809682   11412 addons.go:69] Setting ingress-dns=true in profile "addons-218100"
	I0910 17:35:10.809682   11412 addons.go:234] Setting addon ingress-dns=true in "addons-218100"
	I0910 17:35:10.809682   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.809682   11412 addons.go:69] Setting helm-tiller=true in profile "addons-218100"
	I0910 17:35:10.810227   11412 config.go:182] Loaded profile config "addons-218100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:35:10.810227   11412 addons.go:234] Setting addon helm-tiller=true in "addons-218100"
	I0910 17:35:10.810354   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.810354   11412 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-218100"
	I0910 17:35:10.811202   11412 addons.go:69] Setting storage-provisioner=true in profile "addons-218100"
	I0910 17:35:10.811202   11412 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-218100"
	I0910 17:35:10.811307   11412 addons.go:234] Setting addon storage-provisioner=true in "addons-218100"
	I0910 17:35:10.811378   11412 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-218100"
	I0910 17:35:10.811529   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.811812   11412 addons.go:69] Setting default-storageclass=true in profile "addons-218100"
	I0910 17:35:10.811812   11412 out.go:177] * Verifying Kubernetes components...
	I0910 17:35:10.811812   11412 addons.go:69] Setting volcano=true in profile "addons-218100"
	I0910 17:35:10.811812   11412 addons.go:69] Setting cloud-spanner=true in profile "addons-218100"
	I0910 17:35:10.811978   11412 addons.go:234] Setting addon volcano=true in "addons-218100"
	I0910 17:35:10.811978   11412 addons.go:234] Setting addon cloud-spanner=true in "addons-218100"
	I0910 17:35:10.811978   11412 addons.go:69] Setting volumesnapshots=true in profile "addons-218100"
	I0910 17:35:10.812152   11412 addons.go:234] Setting addon volumesnapshots=true in "addons-218100"
	I0910 17:35:10.812152   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.812152   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.812296   11412 addons.go:69] Setting metrics-server=true in profile "addons-218100"
	I0910 17:35:10.812362   11412 addons.go:234] Setting addon metrics-server=true in "addons-218100"
	I0910 17:35:10.812362   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.810580   11412 addons.go:69] Setting ingress=true in profile "addons-218100"
	I0910 17:35:10.812571   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.812625   11412 addons.go:234] Setting addon ingress=true in "addons-218100"
	I0910 17:35:10.812737   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.812837   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.812837   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.810580   11412 addons.go:69] Setting registry=true in profile "addons-218100"
	I0910 17:35:10.813482   11412 addons.go:234] Setting addon registry=true in "addons-218100"
	I0910 17:35:10.811812   11412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-218100"
	I0910 17:35:10.813615   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.813670   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.811812   11412 addons.go:69] Setting inspektor-gadget=true in profile "addons-218100"
	I0910 17:35:10.811202   11412 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-218100"
	I0910 17:35:10.814541   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.814740   11412 addons.go:234] Setting addon inspektor-gadget=true in "addons-218100"
	I0910 17:35:10.814924   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:10.816314   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.818279   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.819589   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.820127   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.820681   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.821632   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.822431   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.823731   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.825566   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.826576   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.826576   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.827772   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.830572   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:10.834572   11412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:35:11.512557   11412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 17:35:11.877330   11412 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.0426884s)
	I0910 17:35:11.903320   11412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:35:13.701057   11412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.1873801s)
	I0910 17:35:13.701057   11412 start.go:971] {"host.minikube.internal": 172.31.208.1} host record injected into CoreDNS's ConfigMap
	I0910 17:35:13.714079   11412 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.8106381s)
	I0910 17:35:13.716081   11412 node_ready.go:35] waiting up to 6m0s for node "addons-218100" to be "Ready" ...
	I0910 17:35:14.202665   11412 node_ready.go:49] node "addons-218100" has status "Ready":"True"
	I0910 17:35:14.202665   11412 node_ready.go:38] duration metric: took 486.5513ms for node "addons-218100" to be "Ready" ...
	I0910 17:35:14.202665   11412 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:35:14.305988   11412 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:14.825934   11412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-218100" context rescaled to 1 replicas
	I0910 17:35:16.404996   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:16.702609   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:16.702609   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:16.703588   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:16.703588   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:16.704602   11412 addons.go:234] Setting addon default-storageclass=true in "addons-218100"
	I0910 17:35:16.705595   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:16.706600   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:16.708620   11412 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0910 17:35:16.711432   11412 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0910 17:35:16.711432   11412 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0910 17:35:16.711432   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:16.713666   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:16.713666   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:16.717743   11412 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0910 17:35:16.721728   11412 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 17:35:16.721728   11412 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 17:35:16.721728   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:16.866701   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:16.866701   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:16.871713   11412 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-218100"
	I0910 17:35:16.871713   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:16.873704   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:16.892440   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:16.892440   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:16.904518   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:16.904518   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:16.908702   11412 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0910 17:35:16.946920   11412 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0910 17:35:16.972869   11412 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0910 17:35:16.974939   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0910 17:35:16.974939   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:16.977918   11412 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:35:16.977918   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0910 17:35:16.977918   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:17.033231   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:17.033231   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:17.039223   11412 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:35:17.050232   11412 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:35:17.058254   11412 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0910 17:35:17.064214   11412 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:35:17.064214   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0910 17:35:17.064214   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:17.168596   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:17.168596   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:17.177596   11412 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0910 17:35:17.187612   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:17.187612   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:17.187612   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:17.242490   11412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0910 17:35:17.283376   11412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0910 17:35:17.290509   11412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0910 17:35:17.294489   11412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0910 17:35:17.308628   11412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0910 17:35:17.308628   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:17.308628   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:17.317497   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:17.323811   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:17.327812   11412 out.go:177]   - Using image docker.io/registry:2.8.3
	I0910 17:35:17.334057   11412 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0910 17:35:17.359799   11412 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0910 17:35:17.362496   11412 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 17:35:17.362496   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0910 17:35:17.362496   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:17.364496   11412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0910 17:35:17.370515   11412 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0910 17:35:17.388777   11412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 17:35:17.388777   11412 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0910 17:35:17.388777   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:17.401768   11412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 17:35:17.401768   11412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0910 17:35:17.401768   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:17.495884   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:17.495884   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:17.499599   11412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 17:35:17.502920   11412 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:35:17.502920   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 17:35:17.502920   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:17.508499   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:17.508499   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:17.515504   11412 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0910 17:35:17.525521   11412 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:35:17.525521   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0910 17:35:17.526524   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:17.783337   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:17.783337   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:17.786344   11412 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0910 17:35:17.796388   11412 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0910 17:35:17.796388   11412 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0910 17:35:17.797351   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:17.926509   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:17.926509   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:17.946078   11412 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0910 17:35:17.956369   11412 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0910 17:35:17.956369   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0910 17:35:17.956369   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:18.609015   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:18.700323   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:18.700323   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:18.707666   11412 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0910 17:35:18.710571   11412 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0910 17:35:18.713313   11412 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0910 17:35:18.719350   11412 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 17:35:18.719350   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0910 17:35:18.719350   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:21.029560   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:22.094660   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:22.094660   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:22.094660   11412 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 17:35:22.094660   11412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 17:35:22.094660   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:22.316919   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:22.317907   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:22.317907   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:22.460837   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:22.460837   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:22.465475   11412 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0910 17:35:22.496568   11412 out.go:177]   - Using image docker.io/busybox:stable
	I0910 17:35:22.512175   11412 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:35:22.512175   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0910 17:35:22.512175   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:22.723816   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:22.723816   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:22.723816   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:22.739972   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:22.739972   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:22.739972   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:22.748642   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:22.748642   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:22.758988   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:22.786069   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:22.786069   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:22.786069   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:23.180440   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:23.180440   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:23.180440   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:23.213524   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:23.213524   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:23.213524   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:23.219547   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:23.289999   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:23.290059   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:23.290167   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:24.007897   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:24.007897   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:24.007897   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:24.238690   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:24.238690   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:24.238690   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:24.618223   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:24.618371   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:24.618371   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:25.010523   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:25.010523   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:25.011520   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:25.470723   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:25.744786   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:25.744786   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:25.744786   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:25.755249   11412 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0910 17:35:25.755249   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:27.836609   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:27.837570   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:27.838325   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:27.838325   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:28.210423   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:28.210423   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:28.210423   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:29.057420   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:29.057420   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:29.058047   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:29.198569   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:29.198569   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:29.199565   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:29.242704   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:29.242704   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:29.242704   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:29.375100   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:29.375100   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:29.375100   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:29.449839   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:29.449839   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:29.450487   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:29.486147   11412 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0910 17:35:29.486147   11412 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0910 17:35:29.573569   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:29.573569   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:29.574562   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:29.607246   11412 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0910 17:35:29.607246   11412 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0910 17:35:29.635719   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:29.635719   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:29.636226   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:29.695320   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:29.695320   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:29.696197   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:29.707890   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:35:29.714885   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:35:29.750892   11412 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0910 17:35:29.750892   11412 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0910 17:35:29.774972   11412 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 17:35:29.774972   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0910 17:35:29.908197   11412 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0910 17:35:29.908197   11412 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0910 17:35:29.937900   11412 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:35:29.937900   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0910 17:35:29.967931   11412 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 17:35:29.967931   11412 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 17:35:29.975647   11412 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 17:35:29.975647   11412 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0910 17:35:30.075302   11412 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:35:30.075403   11412 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0910 17:35:30.079864   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:30.079864   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:30.079864   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:30.087197   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:35:30.107210   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:35:30.135263   11412 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:35:30.135263   11412 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 17:35:30.176410   11412 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 17:35:30.176550   11412 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0910 17:35:30.205411   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:35:30.224422   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:35:30.321832   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:30.355444   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:35:30.362485   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:30.362485   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:30.362485   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:30.421266   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:30.421576   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:30.421641   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:30.476335   11412 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 17:35:30.476417   11412 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0910 17:35:30.477403   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:30.477403   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:30.477403   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:30.549882   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:30.549882   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:30.553543   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:30.700621   11412 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 17:35:30.700621   11412 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0910 17:35:30.711374   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:30.711374   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:30.713572   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:30.864891   11412 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:35:30.864891   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0910 17:35:30.984838   11412 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 17:35:30.984838   11412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0910 17:35:31.106927   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0910 17:35:31.112691   11412 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 17:35:31.112691   11412 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0910 17:35:31.143700   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:35:31.261376   11412 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0910 17:35:31.261542   11412 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0910 17:35:31.308789   11412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 17:35:31.309322   11412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0910 17:35:31.472583   11412 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0910 17:35:31.472739   11412 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0910 17:35:31.523445   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:31.523445   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:31.523598   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:31.534387   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.8263743s)
	I0910 17:35:31.571704   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 17:35:31.625779   11412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 17:35:31.625779   11412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0910 17:35:31.632832   11412 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:35:31.632832   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0910 17:35:31.643313   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:31.643313   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:31.643313   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:31.804692   11412 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 17:35:31.804692   11412 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0910 17:35:31.877747   11412 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 17:35:31.877747   11412 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0910 17:35:31.941530   11412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 17:35:31.941530   11412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0910 17:35:31.949610   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:35:32.033143   11412 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 17:35:32.033143   11412 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0910 17:35:32.157459   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:35:32.215475   11412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 17:35:32.215475   11412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0910 17:35:32.328434   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:32.418838   11412 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0910 17:35:32.418838   11412 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0910 17:35:32.547598   11412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 17:35:32.547598   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0910 17:35:32.552468   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:32.552468   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:32.552576   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:32.565097   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:35:32.814153   11412 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:35:32.814153   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0910 17:35:32.940861   11412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 17:35:32.940861   11412 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0910 17:35:33.243557   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:35:33.276424   11412 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0910 17:35:33.306294   11412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 17:35:33.306294   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0910 17:35:34.117577   11412 addons.go:234] Setting addon gcp-auth=true in "addons-218100"
	I0910 17:35:34.117716   11412 host.go:66] Checking if "addons-218100" exists ...
	I0910 17:35:34.118708   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:34.280617   11412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 17:35:34.280617   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0910 17:35:34.366465   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:35.067819   11412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:35:35.067930   11412 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0910 17:35:35.528335   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:35:36.133648   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:36.133648   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:36.141641   11412 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0910 17:35:36.141641   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-218100 ).state
	I0910 17:35:36.486291   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:38.115314   11412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 17:35:38.115314   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:38.115617   11412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-218100 ).networkadapters[0]).ipaddresses[0]
	I0910 17:35:38.922535   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:40.599164   11412 main.go:141] libmachine: [stdout =====>] : 172.31.219.103
	
	I0910 17:35:40.599164   11412 main.go:141] libmachine: [stderr =====>] : 
	I0910 17:35:40.599164   11412 sshutil.go:53] new ssh client: &{IP:172.31.219.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-218100\id_rsa Username:docker}
	I0910 17:35:40.858383   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.1427507s)
	I0910 17:35:40.858383   11412 addons.go:475] Verifying addon ingress=true in "addons-218100"
	I0910 17:35:40.858383   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.7504524s)
	I0910 17:35:40.858383   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.7704632s)
	I0910 17:35:40.858383   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.6522572s)
	I0910 17:35:40.858383   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.6332475s)
	I0910 17:35:40.858383   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.5022343s)
	I0910 17:35:40.861275   11412 addons.go:475] Verifying addon metrics-server=true in "addons-218100"
	I0910 17:35:40.859345   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.7508021s)
	I0910 17:35:40.859543   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.7151906s)
	W0910 17:35:40.861275   11412 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:35:40.861275   11412 out.go:177] * Verifying ingress addon...
	I0910 17:35:40.861275   11412 retry.go:31] will retry after 313.612677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:35:40.864994   11412 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-218100 service yakd-dashboard -n yakd-dashboard
	
	I0910 17:35:40.869376   11412 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0910 17:35:40.899651   11412 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0910 17:35:40.899680   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:41.195327   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:35:41.329498   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:41.388821   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:41.892681   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:42.397372   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:42.894270   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:43.420775   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:43.515054   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:43.896025   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:44.456916   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:44.788466   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.2158747s)
	I0910 17:35:44.788630   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.8380708s)
	I0910 17:35:44.788630   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.6303231s)
	I0910 17:35:44.788630   11412 addons.go:475] Verifying addon registry=true in "addons-218100"
	I0910 17:35:44.788916   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (11.5445846s)
	I0910 17:35:44.788831   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.2228492s)
	I0910 17:35:44.793155   11412 out.go:177] * Verifying registry addon...
	I0910 17:35:44.798039   11412 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0910 17:35:44.849875   11412 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0910 17:35:44.849875   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:44.962506   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:45.317020   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:45.417391   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:45.823490   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:45.826091   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:45.850745   11412 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (9.7074707s)
	I0910 17:35:45.850745   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.3207348s)
	I0910 17:35:45.850745   11412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.655106s)
	I0910 17:35:45.850745   11412 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-218100"
	I0910 17:35:45.853761   11412 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:35:45.855733   11412 out.go:177] * Verifying csi-hostpath-driver addon...
	I0910 17:35:45.866763   11412 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0910 17:35:45.866763   11412 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 17:35:45.870763   11412 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 17:35:45.870763   11412 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0910 17:35:45.941773   11412 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 17:35:45.941773   11412 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0910 17:35:45.949745   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:45.949829   11412 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 17:35:45.949829   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:46.004825   11412 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:35:46.004825   11412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0910 17:35:46.060673   11412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:35:46.309005   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:46.373916   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:46.383046   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:46.842229   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:46.970015   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:46.976902   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:46.988014   11412 addons.go:475] Verifying addon gcp-auth=true in "addons-218100"
	I0910 17:35:46.993031   11412 out.go:177] * Verifying gcp-auth addon...
	I0910 17:35:46.998022   11412 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0910 17:35:47.040642   11412 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:35:47.311027   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:47.412179   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:47.412179   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:47.817319   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:47.831143   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:47.879995   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:47.880585   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:48.309028   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:48.387947   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:48.388122   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:48.809532   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:48.873493   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:48.878360   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:49.334708   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:49.386558   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:49.388157   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:49.815886   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:49.873441   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:49.878991   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:50.317079   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:50.321906   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:50.381010   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:50.381010   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:50.809518   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:50.888217   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:50.888217   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:51.315783   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:51.382877   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:51.382877   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:51.808282   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:51.891469   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:51.891850   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:52.315498   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:52.381262   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:52.382085   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:52.808546   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:52.816070   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:52.888474   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:52.888719   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:53.316039   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:53.380123   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:53.381551   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:53.805897   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:53.887985   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:53.888912   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:54.304128   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:54.379791   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:54.384953   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:54.821997   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:54.821997   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:54.884710   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:54.886007   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:55.310208   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:55.379892   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:55.381882   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:55.821021   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:55.882474   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:55.882474   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:56.309885   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:56.373912   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:56.379043   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:56.824244   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:56.828862   11412 pod_ready.go:103] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"False"
	I0910 17:35:56.891849   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:56.894045   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:57.308089   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:57.327950   11412 pod_ready.go:93] pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace has status "Ready":"True"
	I0910 17:35:57.327950   11412 pod_ready.go:82] duration metric: took 43.0190759s for pod "coredns-6f6b679f8f-2mg7w" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.328007   11412 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vt6vr" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.330860   11412 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-vt6vr" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-vt6vr" not found
	I0910 17:35:57.330932   11412 pod_ready.go:82] duration metric: took 2.8523ms for pod "coredns-6f6b679f8f-vt6vr" in "kube-system" namespace to be "Ready" ...
	E0910 17:35:57.330932   11412 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-vt6vr" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-vt6vr" not found
	I0910 17:35:57.330932   11412 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-218100" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.338339   11412 pod_ready.go:93] pod "etcd-addons-218100" in "kube-system" namespace has status "Ready":"True"
	I0910 17:35:57.338408   11412 pod_ready.go:82] duration metric: took 7.4758ms for pod "etcd-addons-218100" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.338408   11412 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-218100" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.344697   11412 pod_ready.go:93] pod "kube-apiserver-addons-218100" in "kube-system" namespace has status "Ready":"True"
	I0910 17:35:57.344697   11412 pod_ready.go:82] duration metric: took 6.2883ms for pod "kube-apiserver-addons-218100" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.344757   11412 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-218100" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.350319   11412 pod_ready.go:93] pod "kube-controller-manager-addons-218100" in "kube-system" namespace has status "Ready":"True"
	I0910 17:35:57.350375   11412 pod_ready.go:82] duration metric: took 5.6174ms for pod "kube-controller-manager-addons-218100" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.350375   11412 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mqnm" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.384430   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:57.387110   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:57.523774   11412 pod_ready.go:93] pod "kube-proxy-2mqnm" in "kube-system" namespace has status "Ready":"True"
	I0910 17:35:57.523827   11412 pod_ready.go:82] duration metric: took 173.4401ms for pod "kube-proxy-2mqnm" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.523827   11412 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-218100" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.959291   11412 pod_ready.go:93] pod "kube-scheduler-addons-218100" in "kube-system" namespace has status "Ready":"True"
	I0910 17:35:57.959351   11412 pod_ready.go:82] duration metric: took 435.4946ms for pod "kube-scheduler-addons-218100" in "kube-system" namespace to be "Ready" ...
	I0910 17:35:57.959351   11412 pod_ready.go:39] duration metric: took 43.7537505s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:35:57.959464   11412 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:35:57.969317   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:57.970207   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:57.970474   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:57.975968   11412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:35:58.004016   11412 api_server.go:72] duration metric: took 47.1921805s to wait for apiserver process to appear ...
	I0910 17:35:58.004016   11412 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:35:58.004095   11412 api_server.go:253] Checking apiserver healthz at https://172.31.219.103:8443/healthz ...
	I0910 17:35:58.101102   11412 api_server.go:279] https://172.31.219.103:8443/healthz returned 200:
	ok
	I0910 17:35:58.103057   11412 api_server.go:141] control plane version: v1.31.0
	I0910 17:35:58.103160   11412 api_server.go:131] duration metric: took 99.0587ms to wait for apiserver health ...
	I0910 17:35:58.103160   11412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:35:58.141208   11412 system_pods.go:59] 18 kube-system pods found
	I0910 17:35:58.141208   11412 system_pods.go:61] "coredns-6f6b679f8f-2mg7w" [2fab1c58-f796-47e5-bfc9-fdeff7ade250] Running
	I0910 17:35:58.141208   11412 system_pods.go:61] "csi-hostpath-attacher-0" [a760e33d-8ba7-4824-a883-80a6a076867f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:35:58.141208   11412 system_pods.go:61] "csi-hostpath-resizer-0" [aeff3c36-a66e-4d5c-aaad-bc4ccbe96b84] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:35:58.141208   11412 system_pods.go:61] "csi-hostpathplugin-msfsb" [d47a7dc4-3214-4cf3-a570-3e1f3ce7b396] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:35:58.141208   11412 system_pods.go:61] "etcd-addons-218100" [2486f822-eee0-4291-aa94-c092704e5453] Running
	I0910 17:35:58.141208   11412 system_pods.go:61] "kube-apiserver-addons-218100" [c4648881-1313-464a-95ab-eeadf70be7e5] Running
	I0910 17:35:58.141208   11412 system_pods.go:61] "kube-controller-manager-addons-218100" [3a021570-2ea1-4510-b97f-76f9701c670a] Running
	I0910 17:35:58.141208   11412 system_pods.go:61] "kube-ingress-dns-minikube" [3ec25369-18fc-435e-8379-bce5dc2f30c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0910 17:35:58.141208   11412 system_pods.go:61] "kube-proxy-2mqnm" [632b6358-714c-4749-a8b4-5e62215cdc8d] Running
	I0910 17:35:58.141208   11412 system_pods.go:61] "kube-scheduler-addons-218100" [6219aea0-5628-436d-b54c-35c47d1480fe] Running
	I0910 17:35:58.141208   11412 system_pods.go:61] "metrics-server-84c5f94fbc-8fd2v" [87c58652-e39f-47e5-a59b-098bd940d727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:35:58.141208   11412 system_pods.go:61] "nvidia-device-plugin-daemonset-zpcgf" [b805867a-0e07-4d26-b1f5-ec0f6d63bc5e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0910 17:35:58.141208   11412 system_pods.go:61] "registry-66c9cd494c-jj9rm" [347bd0df-17e0-49d7-9fea-54ec3391c705] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:35:58.141208   11412 system_pods.go:61] "registry-proxy-b65qr" [8975ac76-f502-459b-8189-457e92575e4b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:35:58.141208   11412 system_pods.go:61] "snapshot-controller-56fcc65765-bxrht" [8c9d3e75-de7f-44a8-b984-573cc95bb17f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:35:58.141208   11412 system_pods.go:61] "snapshot-controller-56fcc65765-c2dv6" [34dc1dfb-829b-4539-ab51-449f933d272a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:35:58.141208   11412 system_pods.go:61] "storage-provisioner" [13139bd0-c988-4e81-b389-5444ccfa9e0b] Running
	I0910 17:35:58.141208   11412 system_pods.go:61] "tiller-deploy-b48cc5f79-ktnmd" [0dc5e187-9a58-4a0e-bef0-b7bca50b8188] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:35:58.141208   11412 system_pods.go:74] duration metric: took 38.0452ms to wait for pod list to return data ...
	I0910 17:35:58.141743   11412 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:35:58.351286   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:58.352986   11412 default_sa.go:45] found service account: "default"
	I0910 17:35:58.352986   11412 default_sa.go:55] duration metric: took 211.229ms for default service account to be created ...
	I0910 17:35:58.352986   11412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:35:58.385597   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:58.388546   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:58.535925   11412 system_pods.go:86] 18 kube-system pods found
	I0910 17:35:58.536478   11412 system_pods.go:89] "coredns-6f6b679f8f-2mg7w" [2fab1c58-f796-47e5-bfc9-fdeff7ade250] Running
	I0910 17:35:58.536478   11412 system_pods.go:89] "csi-hostpath-attacher-0" [a760e33d-8ba7-4824-a883-80a6a076867f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:35:58.536526   11412 system_pods.go:89] "csi-hostpath-resizer-0" [aeff3c36-a66e-4d5c-aaad-bc4ccbe96b84] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:35:58.536526   11412 system_pods.go:89] "csi-hostpathplugin-msfsb" [d47a7dc4-3214-4cf3-a570-3e1f3ce7b396] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:35:58.536526   11412 system_pods.go:89] "etcd-addons-218100" [2486f822-eee0-4291-aa94-c092704e5453] Running
	I0910 17:35:58.536526   11412 system_pods.go:89] "kube-apiserver-addons-218100" [c4648881-1313-464a-95ab-eeadf70be7e5] Running
	I0910 17:35:58.536526   11412 system_pods.go:89] "kube-controller-manager-addons-218100" [3a021570-2ea1-4510-b97f-76f9701c670a] Running
	I0910 17:35:58.536526   11412 system_pods.go:89] "kube-ingress-dns-minikube" [3ec25369-18fc-435e-8379-bce5dc2f30c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0910 17:35:58.536526   11412 system_pods.go:89] "kube-proxy-2mqnm" [632b6358-714c-4749-a8b4-5e62215cdc8d] Running
	I0910 17:35:58.536526   11412 system_pods.go:89] "kube-scheduler-addons-218100" [6219aea0-5628-436d-b54c-35c47d1480fe] Running
	I0910 17:35:58.536526   11412 system_pods.go:89] "metrics-server-84c5f94fbc-8fd2v" [87c58652-e39f-47e5-a59b-098bd940d727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:35:58.536526   11412 system_pods.go:89] "nvidia-device-plugin-daemonset-zpcgf" [b805867a-0e07-4d26-b1f5-ec0f6d63bc5e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0910 17:35:58.536526   11412 system_pods.go:89] "registry-66c9cd494c-jj9rm" [347bd0df-17e0-49d7-9fea-54ec3391c705] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:35:58.536526   11412 system_pods.go:89] "registry-proxy-b65qr" [8975ac76-f502-459b-8189-457e92575e4b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:35:58.536526   11412 system_pods.go:89] "snapshot-controller-56fcc65765-bxrht" [8c9d3e75-de7f-44a8-b984-573cc95bb17f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:35:58.536526   11412 system_pods.go:89] "snapshot-controller-56fcc65765-c2dv6" [34dc1dfb-829b-4539-ab51-449f933d272a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:35:58.536526   11412 system_pods.go:89] "storage-provisioner" [13139bd0-c988-4e81-b389-5444ccfa9e0b] Running
	I0910 17:35:58.536526   11412 system_pods.go:89] "tiller-deploy-b48cc5f79-ktnmd" [0dc5e187-9a58-4a0e-bef0-b7bca50b8188] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:35:58.536526   11412 system_pods.go:126] duration metric: took 183.5274ms to wait for k8s-apps to be running ...
	I0910 17:35:58.536526   11412 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:35:58.544404   11412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:35:58.568837   11412 system_svc.go:56] duration metric: took 32.3087ms WaitForService to wait for kubelet
	I0910 17:35:58.568837   11412 kubeadm.go:582] duration metric: took 47.7569639s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:35:58.568837   11412 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:35:58.730108   11412 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:35:58.730108   11412 node_conditions.go:123] node cpu capacity is 2
	I0910 17:35:58.730108   11412 node_conditions.go:105] duration metric: took 161.26ms to run NodePressure ...
	I0910 17:35:58.730672   11412 start.go:241] waiting for startup goroutines ...
	I0910 17:35:58.814726   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:58.873346   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:58.878521   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:59.322443   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:59.382848   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:59.391295   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:35:59.804626   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:35:59.885951   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:35:59.886456   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:00.309406   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:00.387969   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:00.388049   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:00.815425   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:00.881408   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:00.882002   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:01.307219   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:01.388375   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:01.389550   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:01.811363   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:01.875783   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:01.878077   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:03.265373   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:03.266086   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:03.266086   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:03.271074   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:03.271205   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:03.273729   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:03.304477   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:03.386181   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:03.386181   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:03.818499   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:03.881502   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:03.881502   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:04.312491   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:04.377433   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:04.381169   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:04.824903   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:04.926253   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:04.927855   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:05.309029   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:05.389896   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:05.393109   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:05.814720   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:05.879712   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:05.884231   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:06.307826   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:06.392335   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:06.393447   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:06.816997   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:06.881366   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:06.883038   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:07.309092   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:07.389126   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:07.389511   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:08.309286   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:08.410444   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:08.411914   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:08.416450   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:08.511847   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:08.512145   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:08.808349   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:08.890956   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:08.894324   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:09.317944   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:09.384575   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:09.384977   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:09.807730   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:09.889382   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:09.890528   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:10.314235   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:10.382568   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:10.383987   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:10.819806   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:10.886732   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:10.887387   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:11.312794   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:11.377872   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:11.378417   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:11.805495   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:11.884818   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:11.885007   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:12.310132   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:12.391025   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:12.391757   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:12.816621   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:12.883087   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:12.883632   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:13.305893   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:13.387800   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:13.388321   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:13.808057   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:13.888142   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:13.888197   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:14.306415   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:14.386055   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:14.386322   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:14.908471   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:14.909475   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:14.911489   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:15.307257   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:15.386063   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:15.392593   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:15.808099   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:15.888167   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:15.890664   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:16.319153   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:16.383635   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:16.383982   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:16.810799   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:16.875231   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:16.879453   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:17.315803   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:17.381604   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:17.385312   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:17.808680   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:17.892653   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:17.893025   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:18.598461   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:18.599020   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:18.599316   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:18.945025   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:18.947664   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:18.948223   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:19.310969   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:19.392531   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:19.394534   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:19.815763   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:19.892524   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:19.892903   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:20.315393   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:20.383238   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:20.383577   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:20.807918   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:20.886462   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:20.890607   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:21.314572   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:21.383435   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:21.387258   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:21.819847   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:22.253058   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:22.254347   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:22.308647   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:22.384565   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:22.386200   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:22.812939   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:22.892581   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:22.893141   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:23.322019   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:23.384380   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:23.385348   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:23.831772   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:23.897894   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:23.905337   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:24.312572   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:24.376436   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:24.383026   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:24.820237   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:24.882193   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:24.883907   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:25.328486   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:25.428277   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:25.428277   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:25.821228   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:25.886800   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:25.887255   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:26.313473   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:26.378361   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:26.384295   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:26.818874   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:26.883493   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:26.884575   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:27.312462   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:27.377089   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:27.381763   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:27.819201   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:27.883164   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:27.883490   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:28.313174   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:28.382861   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:28.384424   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:28.806062   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:28.888387   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:28.888689   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:29.313618   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:29.373909   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:29.388612   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:29.817164   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:29.880151   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:29.891788   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:30.848458   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:30.848940   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:30.849134   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:30.858705   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:30.892108   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:30.895095   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:31.455249   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:31.456229   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:31.456840   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:31.815221   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:31.880390   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:31.882936   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:32.333481   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:32.384006   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:32.384006   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:32.813954   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:32.886441   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:32.886581   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:33.314197   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:33.382007   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:33.388876   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:33.821151   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:33.885784   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:33.887322   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:34.308293   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:34.388556   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:34.390575   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:34.813478   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:34.891603   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:34.893837   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:35.318835   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:35.383864   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:35.384676   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:35.809731   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:35.888153   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:35.888332   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:36.313581   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:36.377970   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:36.381986   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:36.818305   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:36.882518   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:36.883287   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:37.672601   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:37.673585   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:37.674979   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:37.950750   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:37.950750   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:37.951722   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:38.318767   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:38.393639   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:38.395364   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:38.820158   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:38.890468   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:38.890991   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:39.308786   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:39.389289   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:39.391155   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:39.813734   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:39.879441   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:39.879981   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:40.308655   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:40.387791   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:40.387791   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:40.811959   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:40.893850   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:40.897269   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:41.318651   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:41.419403   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:41.422056   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:41.821104   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:41.892650   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:41.894680   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:42.311482   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:42.390613   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:42.390613   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:42.820541   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:42.886581   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:42.886581   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:43.314974   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:43.379173   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:43.379916   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:43.820862   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:43.886881   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:43.886881   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:44.314644   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:44.378744   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:44.380857   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:44.808015   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:44.889923   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:44.890083   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:45.316976   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:45.381677   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:45.381677   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:45.821444   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:45.886185   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:45.887189   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:46.314560   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:46.378775   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:46.383033   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:46.826010   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:46.891830   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:46.892264   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:47.316606   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:47.390957   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:47.391058   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:47.821750   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:47.884286   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:47.885231   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:48.310236   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:48.389970   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:48.389970   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:48.817977   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:48.887218   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:48.888619   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:49.310537   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:49.392139   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:49.392139   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:49.816316   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:50.051898   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:50.053126   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:50.308582   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:50.390176   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:50.390632   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:50.811972   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:50.893620   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:50.895536   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:51.871029   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:51.873371   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:51.873371   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:51.880374   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:51.880851   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:51.881174   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:52.315348   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:52.380760   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:52.384492   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:52.815959   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:52.880481   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:52.882484   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:53.319180   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:36:53.384897   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:53.385686   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:53.811393   11412 kapi.go:107] duration metric: took 1m9.0087012s to wait for kubernetes.io/minikube-addons=registry ...
	I0910 17:36:53.892698   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:53.893807   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:54.384195   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:54.385187   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:54.898385   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:54.898385   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:55.382829   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:55.382829   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:56.165265   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:56.165592   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:56.388471   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:56.388792   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:56.891201   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:56.891201   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:57.384481   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:57.387307   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:57.908652   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:57.908652   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:58.383668   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:58.383668   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:58.888470   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:58.889928   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:59.382911   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:36:59.383107   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:59.890290   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:36:59.890547   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:00.383966   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:00.384524   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:00.888983   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:00.889247   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:01.387109   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:01.387109   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:01.899123   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:01.899245   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:02.394559   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:02.394559   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:02.882776   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:02.883044   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:03.381975   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:03.386799   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:03.895362   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:03.896972   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:04.383943   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:04.387640   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:04.888426   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:04.889116   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:05.395851   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:05.396173   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:05.885346   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:05.885878   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:06.392810   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:06.393501   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:06.963469   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:06.965664   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:07.393808   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:07.396178   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:07.890930   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:07.895814   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:08.379377   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:08.381424   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:08.889892   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:08.893183   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:09.381174   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:09.383532   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:09.887507   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:09.887869   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:10.390391   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:10.392391   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:10.884095   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:10.885195   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:11.406132   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:11.416714   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:11.892784   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:11.930813   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:12.400329   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:12.403115   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:12.884638   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:12.884718   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:13.389747   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:13.392524   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:13.883763   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:13.884366   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:14.387599   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:14.389013   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:14.890520   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:14.890979   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:15.387798   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:15.387798   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:15.890725   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:15.892849   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:16.614678   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:16.615245   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:16.895756   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:16.895989   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:17.379269   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:17.382998   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:17.889228   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:17.889640   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:18.380270   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:18.385246   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:18.890615   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:18.890844   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:19.390657   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:19.391080   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:19.889193   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:19.889760   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:20.390890   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:20.391602   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:20.896121   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:20.896571   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:21.384514   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:21.384797   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:21.890201   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:21.890201   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:22.384139   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:22.384293   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:22.892799   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:22.893413   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:23.393542   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:23.394300   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:23.890367   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:23.891351   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:24.384993   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:24.389076   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:24.884125   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:24.884125   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:25.393996   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:25.394073   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:25.887364   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:25.891055   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:26.570984   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:26.572366   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:26.891658   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:26.893223   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:27.391473   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:27.391671   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:27.883998   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:27.884664   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:28.390934   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:28.390934   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:28.879808   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:28.883225   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:29.383762   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:29.384549   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:29.892277   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:29.893837   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:30.390036   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:30.390036   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:30.884368   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:30.884504   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:31.381929   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:31.385553   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:31.893902   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:31.894006   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:32.401232   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:32.404587   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:32.894743   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:32.894821   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:33.383125   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:33.384622   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:33.893174   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:33.898418   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:34.408023   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:34.409414   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:34.893017   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:34.893146   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:35.383248   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:35.385256   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:35.889143   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:35.889262   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:36.394906   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:36.399584   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:36.888583   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:36.888888   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:37.394623   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:37.395708   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:37.885052   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:37.885939   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:38.386385   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:38.388372   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:38.897665   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:38.897951   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:39.397987   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:39.399559   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:39.885653   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:39.887655   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:40.394863   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:40.397412   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:40.886732   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:40.887875   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:41.461574   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:41.462735   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:41.891495   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:41.891847   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:42.390176   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:42.390791   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:42.887749   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:42.887749   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:43.530521   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:43.543538   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:43.888308   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:43.888539   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:44.386225   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:44.386512   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:44.888351   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:44.890181   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:45.395533   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:45.396543   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:45.887744   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:45.887998   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:46.411082   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:46.411660   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:47.008614   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:47.010045   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:47.398037   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:47.401953   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:47.895487   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:47.896042   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:48.399768   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:48.402763   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:48.887628   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:48.887628   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:49.394708   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:49.395183   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:50.305066   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:50.308259   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:50.458372   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:50.460924   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:50.896285   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:37:50.896652   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:51.395605   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:51.396622   11412 kapi.go:107] duration metric: took 2m5.5214787s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0910 17:37:51.899231   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:52.386830   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:52.891314   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:53.398531   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:53.888939   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:54.391421   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:54.891412   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:55.391225   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:55.890945   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:56.393094   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:56.895833   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:57.393464   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:57.896334   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:58.393594   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:58.894896   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:59.400672   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:37:59.887153   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:00.393145   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:00.898907   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:01.391377   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:01.895457   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:02.400011   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:02.901745   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:03.394832   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:03.884965   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:04.392753   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:05.265826   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:05.718008   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:06.068268   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:06.399419   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:06.901261   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:07.390097   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:07.897869   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:08.387248   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:08.897687   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:09.387167   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:09.891796   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:10.395954   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:10.887054   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:11.396022   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:11.900761   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:12.575430   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:12.902215   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:13.393337   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:14.022085   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:14.389468   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:14.897921   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:15.401224   11412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:38:15.902700   11412 kapi.go:107] duration metric: took 2m35.0234939s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0910 17:38:32.029205   11412 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:38:32.029276   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:32.530521   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:33.015526   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:33.515691   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:34.029184   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:34.516044   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:35.016798   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:35.528219   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:36.026326   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:36.528427   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:37.025075   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:37.525794   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:38.022228   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:38.529971   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:39.031401   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:39.529859   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:40.029511   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:40.530266   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:41.027775   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:41.528891   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:42.028954   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:42.529912   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:43.029563   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:43.514861   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:44.029168   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:44.530297   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:45.030574   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:45.530016   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:46.017839   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:46.526048   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:47.029722   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:47.529135   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:48.030742   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:48.515798   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:49.019481   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:49.517066   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:50.030307   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:50.529990   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:51.020027   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:51.526076   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:52.029914   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:52.531485   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:53.018682   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:53.524786   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:54.026441   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:54.524521   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:55.023330   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:55.521288   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:56.022779   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:56.526032   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:57.025146   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:57.530209   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:58.030765   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:58.518276   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:59.016395   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:38:59.517339   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:00.030340   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:00.531656   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:01.029009   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:01.527971   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:02.028389   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:02.525529   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:03.022199   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:03.518965   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:04.023899   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:04.519955   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:05.029204   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:05.525776   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:06.029073   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:06.672850   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:07.029068   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:07.528252   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:08.016178   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:08.525776   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:09.180858   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:09.533246   11412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:39:10.017585   11412 kapi.go:107] duration metric: took 3m23.0060499s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0910 17:39:10.020270   11412 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-218100 cluster.
	I0910 17:39:10.022979   11412 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0910 17:39:10.024877   11412 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0910 17:39:10.030154   11412 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, helm-tiller, metrics-server, cloud-spanner, yakd, volcano, inspektor-gadget, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0910 17:39:10.033859   11412 addons.go:510] duration metric: took 3m59.2092467s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin helm-tiller metrics-server cloud-spanner yakd volcano inspektor-gadget default-storageclass storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0910 17:39:10.033859   11412 start.go:246] waiting for cluster config update ...
	I0910 17:39:10.033859   11412 start.go:255] writing updated cluster config ...
	I0910 17:39:10.045311   11412 ssh_runner.go:195] Run: rm -f paused
	I0910 17:39:10.257567   11412 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 17:39:10.260053   11412 out.go:177] * Done! kubectl is now configured to use "addons-218100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 10 17:50:27 addons-218100 dockerd[1434]: time="2024-09-10T17:50:27.277511600Z" level=warning msg="cleaning up after shim disconnected" id=f587ba25228cd9a6c8154d6c8479771bc1c111877eb65928f356fee0f91422e7 namespace=moby
	Sep 10 17:50:27 addons-218100 dockerd[1434]: time="2024-09-10T17:50:27.277523001Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:50:27 addons-218100 dockerd[1428]: time="2024-09-10T17:50:27.278418462Z" level=info msg="ignoring event" container=f587ba25228cd9a6c8154d6c8479771bc1c111877eb65928f356fee0f91422e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:50:27 addons-218100 dockerd[1434]: time="2024-09-10T17:50:27.472328215Z" level=info msg="shim disconnected" id=0b992c9b39e8605d7f4abd357491454839df74baa52640930570edc77d1169c9 namespace=moby
	Sep 10 17:50:27 addons-218100 dockerd[1434]: time="2024-09-10T17:50:27.472404720Z" level=warning msg="cleaning up after shim disconnected" id=0b992c9b39e8605d7f4abd357491454839df74baa52640930570edc77d1169c9 namespace=moby
	Sep 10 17:50:27 addons-218100 dockerd[1434]: time="2024-09-10T17:50:27.472416121Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:50:27 addons-218100 dockerd[1428]: time="2024-09-10T17:50:27.473225376Z" level=info msg="ignoring event" container=0b992c9b39e8605d7f4abd357491454839df74baa52640930570edc77d1169c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:50:32 addons-218100 dockerd[1428]: time="2024-09-10T17:50:32.797499257Z" level=info msg="ignoring event" container=e8d96d0b9b5d36b8823603c11cc7946deddb0572565817b15c51051c1eeb7ad6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:50:32 addons-218100 dockerd[1434]: time="2024-09-10T17:50:32.798272112Z" level=info msg="shim disconnected" id=e8d96d0b9b5d36b8823603c11cc7946deddb0572565817b15c51051c1eeb7ad6 namespace=moby
	Sep 10 17:50:32 addons-218100 dockerd[1434]: time="2024-09-10T17:50:32.798601735Z" level=warning msg="cleaning up after shim disconnected" id=e8d96d0b9b5d36b8823603c11cc7946deddb0572565817b15c51051c1eeb7ad6 namespace=moby
	Sep 10 17:50:32 addons-218100 dockerd[1434]: time="2024-09-10T17:50:32.799530501Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:50:33 addons-218100 dockerd[1428]: time="2024-09-10T17:50:33.016900657Z" level=info msg="ignoring event" container=c8789f1039e8743f1cba647570e22b3b5b3df676afbdf2aa907bcb18642a3b82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:50:33 addons-218100 dockerd[1434]: time="2024-09-10T17:50:33.017868626Z" level=info msg="shim disconnected" id=c8789f1039e8743f1cba647570e22b3b5b3df676afbdf2aa907bcb18642a3b82 namespace=moby
	Sep 10 17:50:33 addons-218100 dockerd[1434]: time="2024-09-10T17:50:33.018119443Z" level=warning msg="cleaning up after shim disconnected" id=c8789f1039e8743f1cba647570e22b3b5b3df676afbdf2aa907bcb18642a3b82 namespace=moby
	Sep 10 17:50:33 addons-218100 dockerd[1434]: time="2024-09-10T17:50:33.018136945Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:50:34 addons-218100 dockerd[1434]: time="2024-09-10T17:50:34.179295408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 17:50:34 addons-218100 dockerd[1434]: time="2024-09-10T17:50:34.179449419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 17:50:34 addons-218100 dockerd[1434]: time="2024-09-10T17:50:34.179485622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:50:34 addons-218100 dockerd[1434]: time="2024-09-10T17:50:34.179595230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:50:34 addons-218100 cri-dockerd[1324]: time="2024-09-10T17:50:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20fd6cef6f8b8efae6044d2debc709438c52de71cf5588b578dfec7ad0f34962/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 17:50:38 addons-218100 cri-dockerd[1324]: time="2024-09-10T17:50:38Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 10 17:50:38 addons-218100 dockerd[1428]: time="2024-09-10T17:50:38.818025983Z" level=info msg="ignoring event" container=f7279914fbb72cb51039313f3eb1e6b7a0a8b7f7e30e42c962765e02db4d7ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:50:38 addons-218100 dockerd[1434]: time="2024-09-10T17:50:38.818393509Z" level=info msg="shim disconnected" id=f7279914fbb72cb51039313f3eb1e6b7a0a8b7f7e30e42c962765e02db4d7ad5 namespace=moby
	Sep 10 17:50:38 addons-218100 dockerd[1434]: time="2024-09-10T17:50:38.819328274Z" level=warning msg="cleaning up after shim disconnected" id=f7279914fbb72cb51039313f3eb1e6b7a0a8b7f7e30e42c962765e02db4d7ad5 namespace=moby
	Sep 10 17:50:38 addons-218100 dockerd[1434]: time="2024-09-10T17:50:38.819566591Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7279914fbb72       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        23 seconds ago      Running             headlamp                  0                   d873e2974780c       headlamp-57fb76fcdb-hdbzv
	a6835ce36d96b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            2 minutes ago       Exited              gadget                    7                   28f556908c160       gadget-gdgns
	1c082d745cdd1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 11 minutes ago      Running             gcp-auth                  0                   fa8341fe02679       gcp-auth-89d5ffd79-txdmj
	4a6ee6709644a       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             12 minutes ago      Running             controller                0                   f7e48b4eb1a5f       ingress-nginx-controller-bc57996ff-99vlb
	86df46718c660       ce263a8653f9c                                                                                                                13 minutes ago      Exited              patch                     1                   5ebcfbbafe1ad       ingress-nginx-admission-patch-jmnhh
	2af53af2e4646       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   13 minutes ago      Exited              create                    0                   9d8b1aee66a55       ingress-nginx-admission-create-klv5s
	892ca4336c229       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             14 minutes ago      Running             minikube-ingress-dns      0                   a13237f167880       kube-ingress-dns-minikube
	c567742cbb078       6e38f40d628db                                                                                                                15 minutes ago      Running             storage-provisioner       0                   9e1fb70b62208       storage-provisioner
	03ffd8a7b2233       cbb01a7bd410d                                                                                                                15 minutes ago      Running             coredns                   0                   f420b4e6bfd80       coredns-6f6b679f8f-2mg7w
	2e3f547e4a4a4       ad83b2ca7b09e                                                                                                                15 minutes ago      Running             kube-proxy                0                   644d32cd733ee       kube-proxy-2mqnm
	6bf58809458b1       604f5db92eaa8                                                                                                                15 minutes ago      Running             kube-apiserver            0                   f54793392675f       kube-apiserver-addons-218100
	dc49a86152090       045733566833c                                                                                                                15 minutes ago      Running             kube-controller-manager   0                   f9e99378f00d4       kube-controller-manager-addons-218100
	b787770f91819       2e96e5913fc06                                                                                                                15 minutes ago      Running             etcd                      0                   87bbe03cc95a1       etcd-addons-218100
	7599fc3800563       1766f54c897f0                                                                                                                15 minutes ago      Running             kube-scheduler            0                   1b6e3c183845c       kube-scheduler-addons-218100
	
	
	==> controller_ingress [4a6ee6709644] <==
	I0910 17:38:15.215857       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"16bdea22-12c9-4cf9-b2f9-562905c6e547", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0910 17:38:15.215943       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"cda5b5d0-3ae1-439e-a942-23286693045d", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0910 17:38:16.397362       7 nginx.go:317] "Starting NGINX process"
	I0910 17:38:16.397559       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0910 17:38:16.399508       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0910 17:38:16.400604       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0910 17:38:16.415794       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0910 17:38:16.416754       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-99vlb"
	I0910 17:38:16.433323       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-99vlb" node="addons-218100"
	I0910 17:38:16.454274       7 controller.go:213] "Backend successfully reloaded"
	I0910 17:38:16.454821       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-99vlb", UID:"171c9646-5d24-400f-bb3b-16256dcc3241", APIVersion:"v1", ResourceVersion:"759", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0910 17:38:16.454871       7 controller.go:224] "Initial sync, sleeping for 1 second"
	W0910 17:50:33.246042       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0910 17:50:33.286373       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.041s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.041s testedConfigurationSize:18.1kB}
	I0910 17:50:33.286403       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0910 17:50:33.297430       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0910 17:50:33.298019       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0910 17:50:33.298243       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"5c98cd5e-81a1-44ea-8af9-1c1e69f4158c", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3183", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0910 17:50:33.298855       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0910 17:50:33.376815       7 controller.go:213] "Backend successfully reloaded"
	I0910 17:50:33.377577       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-99vlb", UID:"171c9646-5d24-400f-bb3b-16256dcc3241", APIVersion:"v1", ResourceVersion:"759", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0910 17:50:36.631852       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0910 17:50:36.632045       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0910 17:50:36.695525       7 controller.go:213] "Backend successfully reloaded"
	I0910 17:50:36.696370       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-99vlb", UID:"171c9646-5d24-400f-bb3b-16256dcc3241", APIVersion:"v1", ResourceVersion:"759", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [03ffd8a7b223] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.10:40301 - 26089 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000308017s
	[INFO] 10.244.0.10:40301 - 2541 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000305416s
	[INFO] 10.244.0.10:52698 - 36442 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000145608s
	[INFO] 10.244.0.10:52698 - 52565 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000120706s
	[INFO] 10.244.0.10:53769 - 24205 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00018021s
	[INFO] 10.244.0.10:53769 - 30836 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093305s
	[INFO] 10.244.0.10:56981 - 49252 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000171909s
	[INFO] 10.244.0.10:56981 - 2658 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000276615s
	[INFO] 10.244.0.10:51925 - 27239 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000066304s
	[INFO] 10.244.0.10:51925 - 1898 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000164909s
	[INFO] 10.244.0.10:43588 - 4662 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081205s
	[INFO] 10.244.0.10:43588 - 33995 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000089204s
	[INFO] 10.244.0.10:36049 - 10549 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000071104s
	[INFO] 10.244.0.10:36049 - 62007 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130807s
	[INFO] 10.244.0.10:41653 - 1545 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127507s
	[INFO] 10.244.0.10:41653 - 15375 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000059903s
	[INFO] 10.244.0.26:59585 - 12190 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000400124s
	[INFO] 10.244.0.26:45313 - 51938 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000153309s
	[INFO] 10.244.0.26:52649 - 48775 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.006402876s
	[INFO] 10.244.0.26:43686 - 64263 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000373122s
	[INFO] 10.244.0.26:57519 - 12511 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101006s
	[INFO] 10.244.0.26:54550 - 36255 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146009s
	[INFO] 10.244.0.26:50984 - 52025 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.004291552s
	[INFO] 10.244.0.26:39820 - 56300 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.009732371s
	
	
	==> describe nodes <==
	Name:               addons-218100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-218100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=addons-218100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_35_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-218100
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:35:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-218100
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:50:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:50:13 +0000   Tue, 10 Sep 2024 17:35:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:50:13 +0000   Tue, 10 Sep 2024 17:35:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:50:13 +0000   Tue, 10 Sep 2024 17:35:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:50:13 +0000   Tue, 10 Sep 2024 17:35:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.219.103
	  Hostname:    addons-218100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 36302d971d154cd69d3d8027a9db9de9
	  System UUID:                88813ff4-1a79-3843-a100-2c6693233b82
	  Boot ID:                    9aa4b89b-0103-4fbe-9377-7a028aa35140
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gadget                      gadget-gdgns                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  gcp-auth                    gcp-auth-89d5ffd79-txdmj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  headlamp                    headlamp-57fb76fcdb-hdbzv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-99vlb    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-2mg7w                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-218100                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-218100                250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-218100       200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-2mqnm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-218100                100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-218100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-218100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-218100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-218100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-218100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-218100 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-218100 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-218100 event: Registered Node addons-218100 in Controller
	
	
	==> dmesg <==
	[  +9.177930] kauditd_printk_skb: 12 callbacks suppressed
	[Sep10 17:39] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.197676] kauditd_printk_skb: 40 callbacks suppressed
	[ +32.564253] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:40] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.720182] kauditd_printk_skb: 21 callbacks suppressed
	[Sep10 17:43] kauditd_printk_skb: 36 callbacks suppressed
	[Sep10 17:48] kauditd_printk_skb: 28 callbacks suppressed
	[  +4.418733] hrtimer: interrupt took 3614601 ns
	[ +20.604764] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.985048] kauditd_printk_skb: 15 callbacks suppressed
	[Sep10 17:49] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.006262] kauditd_printk_skb: 19 callbacks suppressed
	[  +8.198696] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.252432] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.990311] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.013552] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.002455] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.263035] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.242088] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:50] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.069515] kauditd_printk_skb: 22 callbacks suppressed
	[  +8.964524] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.817133] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.768005] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [b787770f9181] <==
	{"level":"warn","ts":"2024-09-10T17:39:41.586927Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:39:40.930523Z","time spent":"656.397262ms","remote":"127.0.0.1:48118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-10T17:39:41.587389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.877897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:39:41.587468Z","caller":"traceutil/trace.go:171","msg":"trace[531755584] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1668; }","duration":"272.957402ms","start":"2024-09-10T17:39:41.314502Z","end":"2024-09-10T17:39:41.587460Z","steps":["trace[531755584] 'agreement among raft nodes before linearized reading'  (duration: 272.864296ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:39:41.925126Z","caller":"traceutil/trace.go:171","msg":"trace[35351899] linearizableReadLoop","detail":"{readStateIndex:1750; appliedIndex:1749; }","duration":"301.269982ms","start":"2024-09-10T17:39:41.623837Z","end":"2024-09-10T17:39:41.925107Z","steps":["trace[35351899] 'read index received'  (duration: 208.528678ms)","trace[35351899] 'applied index is now lower than readState.Index'  (duration: 92.740704ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T17:39:41.925218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"301.375488ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:39:41.925238Z","caller":"traceutil/trace.go:171","msg":"trace[2112496843] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1669; }","duration":"301.40539ms","start":"2024-09-10T17:39:41.623826Z","end":"2024-09-10T17:39:41.925232Z","steps":["trace[2112496843] 'agreement among raft nodes before linearized reading'  (duration: 301.338586ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:39:41.926574Z","caller":"traceutil/trace.go:171","msg":"trace[1352076291] transaction","detail":"{read_only:false; response_revision:1669; number_of_response:1; }","duration":"327.969166ms","start":"2024-09-10T17:39:41.598555Z","end":"2024-09-10T17:39:41.926524Z","steps":["trace[1352076291] 'process raft request'  (duration: 233.766175ms)","trace[1352076291] 'compare'  (duration: 92.433486ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T17:39:41.926745Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:39:41.598540Z","time spent":"328.069572ms","remote":"127.0.0.1:48118","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1665 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-10T17:45:00.596034Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1706}
	{"level":"info","ts":"2024-09-10T17:45:00.959198Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1706,"took":"350.721478ms","hash":3118426347,"current-db-size-bytes":9269248,"current-db-size":"9.3 MB","current-db-size-in-use-bytes":5394432,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2024-09-10T17:45:00.959381Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3118426347,"revision":1706,"compact-revision":-1}
	{"level":"info","ts":"2024-09-10T17:50:00.619105Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2292}
	{"level":"info","ts":"2024-09-10T17:50:00.659841Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2292,"took":"39.952599ms","hash":1647921740,"current-db-size-bytes":9269248,"current-db-size":"9.3 MB","current-db-size-in-use-bytes":4055040,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2024-09-10T17:50:00.659961Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1647921740,"revision":2292,"compact-revision":1706}
	{"level":"warn","ts":"2024-09-10T17:50:17.833221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.91358ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:50:17.833276Z","caller":"traceutil/trace.go:171","msg":"trace[1191211835] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:3107; }","duration":"216.977984ms","start":"2024-09-10T17:50:17.616287Z","end":"2024-09-10T17:50:17.833265Z","steps":["trace[1191211835] 'range keys from in-memory index tree'  (duration: 216.902979ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:50:17.833614Z","caller":"traceutil/trace.go:171","msg":"trace[2031589117] transaction","detail":"{read_only:false; response_revision:3108; number_of_response:1; }","duration":"795.208149ms","start":"2024-09-10T17:50:17.038396Z","end":"2024-09-10T17:50:17.833604Z","steps":["trace[2031589117] 'process raft request'  (duration: 785.9283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:50:17.833754Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:50:17.038379Z","time spent":"795.258852ms","remote":"127.0.0.1:48118","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:3105 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-10T17:50:17.843630Z","caller":"traceutil/trace.go:171","msg":"trace[713642110] linearizableReadLoop","detail":"{readStateIndex:3337; appliedIndex:3335; }","duration":"243.286126ms","start":"2024-09-10T17:50:17.600330Z","end":"2024-09-10T17:50:17.843615Z","steps":["trace[713642110] 'read index received'  (duration: 223.988175ms)","trace[713642110] 'applied index is now lower than readState.Index'  (duration: 19.297151ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T17:50:17.844178Z","caller":"traceutil/trace.go:171","msg":"trace[1670582046] transaction","detail":"{read_only:false; response_revision:3109; number_of_response:1; }","duration":"393.694351ms","start":"2024-09-10T17:50:17.450471Z","end":"2024-09-10T17:50:17.844165Z","steps":["trace[1670582046] 'process raft request'  (duration: 393.022904ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:50:17.844696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:50:17.450451Z","time spent":"393.83206ms","remote":"127.0.0.1:48230","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-xhiehgzglav37nd7ke6uuikmne\" mod_revision:3065 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-xhiehgzglav37nd7ke6uuikmne\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-xhiehgzglav37nd7ke6uuikmne\" > >"}
	{"level":"warn","ts":"2024-09-10T17:50:17.845008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.675223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:50:17.845576Z","caller":"traceutil/trace.go:171","msg":"trace[184962822] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3110; }","duration":"245.241362ms","start":"2024-09-10T17:50:17.600325Z","end":"2024-09-10T17:50:17.845566Z","steps":["trace[184962822] 'agreement among raft nodes before linearized reading'  (duration: 244.659521ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:50:17.845191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.75194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:50:17.846577Z","caller":"traceutil/trace.go:171","msg":"trace[1683510779] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:3110; }","duration":"109.138337ms","start":"2024-09-10T17:50:17.737431Z","end":"2024-09-10T17:50:17.846569Z","steps":["trace[1683510779] 'agreement among raft nodes before linearized reading'  (duration: 107.738339ms)"],"step_count":1}
	
	
	==> gcp-auth [1c082d745cdd] <==
	2024/09/10 17:40:12 Ready to write response ...
	2024/09/10 17:40:12 Ready to marshal response ...
	2024/09/10 17:40:12 Ready to write response ...
	2024/09/10 17:48:54 Ready to marshal response ...
	2024/09/10 17:48:54 Ready to write response ...
	2024/09/10 17:48:54 Ready to marshal response ...
	2024/09/10 17:48:54 Ready to write response ...
	2024/09/10 17:49:03 Ready to marshal response ...
	2024/09/10 17:49:03 Ready to write response ...
	2024/09/10 17:49:07 Ready to marshal response ...
	2024/09/10 17:49:07 Ready to write response ...
	2024/09/10 17:49:15 Ready to marshal response ...
	2024/09/10 17:49:15 Ready to write response ...
	2024/09/10 17:49:23 Ready to marshal response ...
	2024/09/10 17:49:23 Ready to write response ...
	2024/09/10 17:49:36 Ready to marshal response ...
	2024/09/10 17:49:36 Ready to write response ...
	2024/09/10 17:50:09 Ready to marshal response ...
	2024/09/10 17:50:09 Ready to write response ...
	2024/09/10 17:50:09 Ready to marshal response ...
	2024/09/10 17:50:09 Ready to write response ...
	2024/09/10 17:50:09 Ready to marshal response ...
	2024/09/10 17:50:09 Ready to write response ...
	2024/09/10 17:50:33 Ready to marshal response ...
	2024/09/10 17:50:33 Ready to write response ...
	
	
	==> kernel <==
	 17:50:39 up 17 min,  0 users,  load average: 1.22, 0.89, 0.81
	Linux addons-218100 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6bf58809458b] <==
	I0910 17:40:03.324757       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0910 17:40:03.746110       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0910 17:40:03.746180       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0910 17:40:03.833743       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0910 17:40:03.876981       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0910 17:40:04.324909       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0910 17:40:04.678611       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0910 17:49:14.450122       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0910 17:50:06.606609       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:50:06.606719       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:50:06.653416       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:50:06.653477       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:50:06.705374       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:50:06.705697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:50:06.707094       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:50:06.707128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:50:06.751703       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:50:06.751738       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0910 17:50:07.708318       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0910 17:50:07.752600       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0910 17:50:07.857212       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0910 17:50:09.603840       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.158.8"}
	I0910 17:50:26.531062       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0910 17:50:33.288130       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0910 17:50:33.740249       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.213.23"}
	
	
	==> kube-controller-manager [dc49a8615209] <==
	E0910 17:50:14.880312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:50:15.467410       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:50:15.467549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:50:15.601641       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:50:15.601731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:50:18.800377       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0910 17:50:18.842124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="108.907µs"
	I0910 17:50:18.886218       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="16.135928ms"
	I0910 17:50:18.888742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="95.907µs"
	I0910 17:50:19.634351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.4µs"
	W0910 17:50:23.997259       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:50:23.997305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:50:25.046517       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:50:25.046583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:50:25.569352       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:50:25.569467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:50:26.531146       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:50:26.531466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:50:27.082204       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="6.701µs"
	W0910 17:50:28.266934       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:50:28.266981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:50:32.390471       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:50:32.390618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:50:37.324879       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0910 17:50:38.666923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.3µs"
	
	
	==> kube-proxy [2e3f547e4a4a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:35:23.893546       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:35:24.059148       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.31.219.103"]
	E0910 17:35:24.059340       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:35:24.275141       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:35:24.275281       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:35:24.275322       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:35:24.281009       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:35:24.285237       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:35:24.293761       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:35:24.307938       1 config.go:197] "Starting service config controller"
	I0910 17:35:24.308444       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:35:24.308143       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:35:24.308918       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:35:24.309345       1 config.go:326] "Starting node config controller"
	I0910 17:35:24.309562       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:35:24.409583       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 17:35:24.409708       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:35:24.410058       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7599fc380056] <==
	W0910 17:35:03.595723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:35:03.596090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:03.676571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 17:35:03.676630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:03.720831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 17:35:03.720944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:03.816977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 17:35:03.817114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:03.880867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:35:03.880977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:03.900469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 17:35:03.900816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:03.920175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 17:35:03.920221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:03.935336       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 17:35:03.935437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:03.989992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 17:35:03.990059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:04.054826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 17:35:04.055023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:04.057490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 17:35:04.057519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:35:04.144996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 17:35:04.145035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 17:35:06.059825       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.276723    2229 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b805867a-0e07-4d26-b1f5-ec0f6d63bc5e-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "b805867a-0e07-4d26-b1f5-ec0f6d63bc5e" (UID: "b805867a-0e07-4d26-b1f5-ec0f6d63bc5e"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.280452    2229 scope.go:117] "RemoveContainer" containerID="e8d96d0b9b5d36b8823603c11cc7946deddb0572565817b15c51051c1eeb7ad6"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: E0910 17:50:33.281969    2229 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e8d96d0b9b5d36b8823603c11cc7946deddb0572565817b15c51051c1eeb7ad6" containerID="e8d96d0b9b5d36b8823603c11cc7946deddb0572565817b15c51051c1eeb7ad6"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.281997    2229 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e8d96d0b9b5d36b8823603c11cc7946deddb0572565817b15c51051c1eeb7ad6"} err="failed to get container status \"e8d96d0b9b5d36b8823603c11cc7946deddb0572565817b15c51051c1eeb7ad6\": rpc error: code = Unknown desc = Error response from daemon: No such container: e8d96d0b9b5d36b8823603c11cc7946deddb0572565817b15c51051c1eeb7ad6"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.288548    2229 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b805867a-0e07-4d26-b1f5-ec0f6d63bc5e-kube-api-access-sbbtr" (OuterVolumeSpecName: "kube-api-access-sbbtr") pod "b805867a-0e07-4d26-b1f5-ec0f6d63bc5e" (UID: "b805867a-0e07-4d26-b1f5-ec0f6d63bc5e"). InnerVolumeSpecName "kube-api-access-sbbtr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.377242    2229 reconciler_common.go:288] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/b805867a-0e07-4d26-b1f5-ec0f6d63bc5e-device-plugin\") on node \"addons-218100\" DevicePath \"\""
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.377394    2229 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sbbtr\" (UniqueName: \"kubernetes.io/projected/b805867a-0e07-4d26-b1f5-ec0f6d63bc5e-kube-api-access-sbbtr\") on node \"addons-218100\" DevicePath \"\""
	Sep 10 17:50:33 addons-218100 kubelet[2229]: E0910 17:50:33.675721    2229 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b805867a-0e07-4d26-b1f5-ec0f6d63bc5e" containerName="nvidia-device-plugin-ctr"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: E0910 17:50:33.675768    2229 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="347bd0df-17e0-49d7-9fea-54ec3391c705" containerName="registry"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: E0910 17:50:33.675781    2229 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8975ac76-f502-459b-8189-457e92575e4b" containerName="registry-proxy"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: E0910 17:50:33.675792    2229 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e28d2710-3544-4f47-9323-a24c31623505" containerName="yakd"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.675829    2229 memory_manager.go:354] "RemoveStaleState removing state" podUID="347bd0df-17e0-49d7-9fea-54ec3391c705" containerName="registry"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.675839    2229 memory_manager.go:354] "RemoveStaleState removing state" podUID="8975ac76-f502-459b-8189-457e92575e4b" containerName="registry-proxy"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.675847    2229 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28d2710-3544-4f47-9323-a24c31623505" containerName="yakd"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.675855    2229 memory_manager.go:354] "RemoveStaleState removing state" podUID="b805867a-0e07-4d26-b1f5-ec0f6d63bc5e" containerName="nvidia-device-plugin-ctr"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.780209    2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mslkk\" (UniqueName: \"kubernetes.io/projected/fd53a022-7dee-4a6d-ae4b-02d2f2a5edfa-kube-api-access-mslkk\") pod \"nginx\" (UID: \"fd53a022-7dee-4a6d-ae4b-02d2f2a5edfa\") " pod="default/nginx"
	Sep 10 17:50:33 addons-218100 kubelet[2229]: I0910 17:50:33.780364    2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fd53a022-7dee-4a6d-ae4b-02d2f2a5edfa-gcp-creds\") pod \"nginx\" (UID: \"fd53a022-7dee-4a6d-ae4b-02d2f2a5edfa\") " pod="default/nginx"
	Sep 10 17:50:34 addons-218100 kubelet[2229]: I0910 17:50:34.094403    2229 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b805867a-0e07-4d26-b1f5-ec0f6d63bc5e" path="/var/lib/kubelet/pods/b805867a-0e07-4d26-b1f5-ec0f6d63bc5e/volumes"
	Sep 10 17:50:34 addons-218100 kubelet[2229]: I0910 17:50:34.360925    2229 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fd6cef6f8b8efae6044d2debc709438c52de71cf5588b578dfec7ad0f34962"
	Sep 10 17:50:39 addons-218100 kubelet[2229]: I0910 17:50:39.833231    2229 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn8qg\" (UniqueName: \"kubernetes.io/projected/c6fb41cb-1074-44ec-a8a2-394369f7f222-kube-api-access-gn8qg\") pod \"c6fb41cb-1074-44ec-a8a2-394369f7f222\" (UID: \"c6fb41cb-1074-44ec-a8a2-394369f7f222\") "
	Sep 10 17:50:39 addons-218100 kubelet[2229]: I0910 17:50:39.833284    2229 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c6fb41cb-1074-44ec-a8a2-394369f7f222-gcp-creds\") pod \"c6fb41cb-1074-44ec-a8a2-394369f7f222\" (UID: \"c6fb41cb-1074-44ec-a8a2-394369f7f222\") "
	Sep 10 17:50:39 addons-218100 kubelet[2229]: I0910 17:50:39.833507    2229 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fb41cb-1074-44ec-a8a2-394369f7f222-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "c6fb41cb-1074-44ec-a8a2-394369f7f222" (UID: "c6fb41cb-1074-44ec-a8a2-394369f7f222"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 10 17:50:39 addons-218100 kubelet[2229]: I0910 17:50:39.839940    2229 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6fb41cb-1074-44ec-a8a2-394369f7f222-kube-api-access-gn8qg" (OuterVolumeSpecName: "kube-api-access-gn8qg") pod "c6fb41cb-1074-44ec-a8a2-394369f7f222" (UID: "c6fb41cb-1074-44ec-a8a2-394369f7f222"). InnerVolumeSpecName "kube-api-access-gn8qg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:50:39 addons-218100 kubelet[2229]: I0910 17:50:39.934974    2229 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gn8qg\" (UniqueName: \"kubernetes.io/projected/c6fb41cb-1074-44ec-a8a2-394369f7f222-kube-api-access-gn8qg\") on node \"addons-218100\" DevicePath \"\""
	Sep 10 17:50:39 addons-218100 kubelet[2229]: I0910 17:50:39.935086    2229 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c6fb41cb-1074-44ec-a8a2-394369f7f222-gcp-creds\") on node \"addons-218100\" DevicePath \"\""
	
	
	==> storage-provisioner [c567742cbb07] <==
	I0910 17:35:41.713175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:35:41.768174       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:35:41.768228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 17:35:41.790424       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 17:35:41.793024       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"22f7675f-7dcc-4440-b160-38924a8d940a", APIVersion:"v1", ResourceVersion:"771", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-218100_311544da-86b6-41e9-97e7-5e28aafe666b became leader
	I0910 17:35:41.793072       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-218100_311544da-86b6-41e9-97e7-5e28aafe666b!
	I0910 17:35:41.893751       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-218100_311544da-86b6-41e9-97e7-5e28aafe666b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-218100 -n addons-218100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-218100 -n addons-218100: (10.9636228s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-218100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox gadget-gdgns ingress-nginx-admission-create-klv5s ingress-nginx-admission-patch-jmnhh
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-218100 describe pod busybox gadget-gdgns ingress-nginx-admission-create-klv5s ingress-nginx-admission-patch-jmnhh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-218100 describe pod busybox gadget-gdgns ingress-nginx-admission-create-klv5s ingress-nginx-admission-patch-jmnhh: exit status 1 (211.7562ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-218100/172.31.219.103
	Start Time:       Tue, 10 Sep 2024 17:40:12 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n4k76 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n4k76:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                  From               Message
	  ----     ------          ----                 ----               -------
	  Normal   Scheduled       10m                  default-scheduler  Successfully assigned default/busybox to addons-218100
	  Normal   SandboxChanged  10m                  kubelet            Pod sandbox changed, it will be killed and re-created.
	  Warning  Failed          9m17s (x6 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling         9m2s (x4 over 10m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed          9m2s (x4 over 10m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed          9m2s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff         34s (x43 over 10m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gadget-gdgns" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-klv5s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jmnhh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-218100 describe pod busybox gadget-gdgns ingress-nginx-admission-create-klv5s ingress-nginx-admission-patch-jmnhh: exit status 1
--- FAIL: TestAddons/parallel/Registry (118.82s)

TestErrorSpam/setup (174.14s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-885900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 --driver=hyperv
E0910 17:54:10.371885    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:10.400808    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:10.432799    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:10.478205    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:10.541610    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:10.651099    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:10.840081    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:11.184803    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:11.852704    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:13.150279    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:15.734193    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:20.879494    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:31.150752    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:54:51.655728    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:55:32.642535    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-885900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 --driver=hyperv: (2m54.1371506s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-885900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
- KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
- MINIKUBE_LOCATION=19598
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-885900" primary control-plane node in "nospam-885900" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-885900" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (174.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (29.84s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
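The failure line above is a hard-link error: the test hard-links the minikube binary to out\kubectl.exe, and the link from a previous run was never cleaned up, so Windows refuses to create the file again. The same class of error is easy to reproduce (a minimal sketch in Python for brevity; the file names are stand-ins, not the test's real paths):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "minikube.exe")  # stand-in for the built binary
    dst = os.path.join(d, "kubectl.exe")   # stand-in for the linked kubectl shim

    with open(src, "w") as f:
        f.write("stub")

    os.link(src, dst)        # first link succeeds, as on a clean workspace

    try:
        os.link(src, dst)    # second link: destination already exists
        raised = False
    except FileExistsError:  # Windows reports "Cannot create a file when
        raised = True        # that file already exists." for this case

    os.remove(dst)           # removing the stale link first...
    os.link(src, dst)        # ...lets the relink succeed
    relinked = os.path.exists(dst)

print(raised, relinked)
```

In other words, a stale `out\kubectl.exe` left behind by an earlier run is enough to fail this test; deleting the destination (or tolerating an already-exists error) before linking avoids it.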
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800: (10.5318627s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs -n 25: (7.6715262s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-885900 --log_dir                                     | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:56 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-885900 --log_dir                                     | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-885900 --log_dir                                     | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-885900 --log_dir                                     | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-885900 --log_dir                                     | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-885900 --log_dir                                     | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:58 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-885900 --log_dir                                     | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:58 UTC | 10 Sep 24 17:58 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-885900                                            | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:58 UTC | 10 Sep 24 17:58 UTC |
	| start   | -p functional-879800                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:58 UTC | 10 Sep 24 18:02 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-879800                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:02 UTC | 10 Sep 24 18:04 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                 | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                 | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                 | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:04 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                 | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:05 UTC |
	|         | minikube-local-cache-test:functional-879800                 |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache delete                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | minikube-local-cache-test:functional-879800                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	| ssh     | functional-879800 ssh sudo                                  | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-879800                                           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-879800 ssh                                       | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache reload                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	| ssh     | functional-879800 ssh                                       | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-879800 kubectl --                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | --context functional-879800                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:02:06
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:02:06.500576    5904 out.go:345] Setting OutFile to fd 672 ...
	I0910 18:02:06.559815    5904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:02:06.559815    5904 out.go:358] Setting ErrFile to fd 700...
	I0910 18:02:06.560671    5904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:02:06.579047    5904 out.go:352] Setting JSON to false
	I0910 18:02:06.581985    5904 start.go:129] hostinfo: {"hostname":"minikube5","uptime":102589,"bootTime":1725888736,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:02:06.581985    5904 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:02:06.586039    5904 out.go:177] * [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:02:06.590731    5904 notify.go:220] Checking for updates...
	I0910 18:02:06.590731    5904 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:02:06.593313    5904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:02:06.595578    5904 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:02:06.597547    5904 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:02:06.599694    5904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:02:06.603227    5904 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:02:06.603828    5904 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:02:11.439496    5904 out.go:177] * Using the hyperv driver based on existing profile
	I0910 18:02:11.441769    5904 start.go:297] selected driver: hyperv
	I0910 18:02:11.441769    5904 start.go:901] validating driver "hyperv" against &{Name:functional-879800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:functional-879800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.92 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:02:11.442301    5904 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:02:11.485493    5904 cni.go:84] Creating CNI manager for ""
	I0910 18:02:11.485563    5904 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 18:02:11.485795    5904 start.go:340] cluster config:
	{Name:functional-879800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-879800 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.92 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:02:11.485795    5904 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:02:11.490169    5904 out.go:177] * Starting "functional-879800" primary control-plane node in "functional-879800" cluster
	I0910 18:02:11.493294    5904 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:02:11.493460    5904 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 18:02:11.493460    5904 cache.go:56] Caching tarball of preloaded images
	I0910 18:02:11.494006    5904 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 18:02:11.494195    5904 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 18:02:11.494357    5904 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\config.json ...
	I0910 18:02:11.495502    5904 start.go:360] acquireMachinesLock for functional-879800: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:02:11.496128    5904 start.go:364] duration metric: took 626.2µs to acquireMachinesLock for "functional-879800"
	I0910 18:02:11.496266    5904 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:02:11.496266    5904 fix.go:54] fixHost starting: 
	I0910 18:02:11.496266    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:14.038361    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:14.038361    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:14.038361    5904 fix.go:112] recreateIfNeeded on functional-879800: state=Running err=<nil>
	W0910 18:02:14.038442    5904 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:02:14.041771    5904 out.go:177] * Updating the running hyperv "functional-879800" VM ...
	I0910 18:02:14.044197    5904 machine.go:93] provisionDockerMachine start ...
	I0910 18:02:14.044280    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:16.048524    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:16.048524    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:16.048524    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:18.405440    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:18.405440    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:18.409692    5904 main.go:141] libmachine: Using SSH client type: native
	I0910 18:02:18.410339    5904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:02:18.410339    5904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:02:18.548078    5904 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-879800
	
	I0910 18:02:18.548078    5904 buildroot.go:166] provisioning hostname "functional-879800"
	I0910 18:02:18.548078    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:20.506239    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:20.506239    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:20.506239    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:22.840045    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:22.840045    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:22.844728    5904 main.go:141] libmachine: Using SSH client type: native
	I0910 18:02:22.845254    5904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:02:22.845254    5904 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-879800 && echo "functional-879800" | sudo tee /etc/hostname
	I0910 18:02:23.011434    5904 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-879800
	
	I0910 18:02:23.011632    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:24.936509    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:24.936509    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:24.936509    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:27.187444    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:27.187504    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:27.191066    5904 main.go:141] libmachine: Using SSH client type: native
	I0910 18:02:27.191066    5904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:02:27.191066    5904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-879800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-879800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-879800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:02:27.331040    5904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:02:27.331098    5904 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 18:02:27.331222    5904 buildroot.go:174] setting up certificates
	I0910 18:02:27.331275    5904 provision.go:84] configureAuth start
	I0910 18:02:27.331366    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:29.253579    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:29.253646    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:29.253712    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:31.530591    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:31.530591    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:31.531216    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:33.394163    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:33.394163    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:33.394433    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:35.664349    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:35.664349    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:35.665299    5904 provision.go:143] copyHostCerts
	I0910 18:02:35.665447    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 18:02:35.665756    5904 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 18:02:35.665822    5904 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 18:02:35.666207    5904 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 18:02:35.667284    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 18:02:35.667515    5904 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 18:02:35.667515    5904 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 18:02:35.667833    5904 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 18:02:35.668799    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 18:02:35.668799    5904 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 18:02:35.668799    5904 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 18:02:35.669343    5904 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 18:02:35.670153    5904 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-879800 san=[127.0.0.1 172.31.208.92 functional-879800 localhost minikube]
	I0910 18:02:35.945105    5904 provision.go:177] copyRemoteCerts
	I0910 18:02:35.953896    5904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:02:35.953896    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:37.845257    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:37.845257    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:37.846037    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:40.123950    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:40.124497    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:40.124497    5904 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:02:40.231141    5904 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.276956s)
	I0910 18:02:40.231141    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 18:02:40.231831    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:02:40.272476    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 18:02:40.272849    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 18:02:40.319270    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 18:02:40.319599    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 18:02:40.361279    5904 provision.go:87] duration metric: took 13.0290307s to configureAuth
	I0910 18:02:40.361279    5904 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:02:40.361933    5904 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:02:40.362007    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:42.223293    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:42.223424    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:42.223512    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:44.488483    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:44.489253    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:44.493043    5904 main.go:141] libmachine: Using SSH client type: native
	I0910 18:02:44.493620    5904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:02:44.493620    5904 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 18:02:44.631292    5904 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 18:02:44.631836    5904 buildroot.go:70] root file system type: tmpfs
	I0910 18:02:44.632078    5904 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 18:02:44.632676    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:46.500291    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:46.500291    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:46.500529    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:48.785709    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:48.786709    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:48.791351    5904 main.go:141] libmachine: Using SSH client type: native
	I0910 18:02:48.791968    5904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:02:48.791968    5904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 18:02:48.961686    5904 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 18:02:48.961791    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:50.860838    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:50.861854    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:50.862135    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:53.106440    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:53.107316    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:53.112789    5904 main.go:141] libmachine: Using SSH client type: native
	I0910 18:02:53.112789    5904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:02:53.112789    5904 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 18:02:53.263978    5904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:02:53.263978    5904 machine.go:96] duration metric: took 39.2171273s to provisionDockerMachine
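The unit update that completes above uses a write-new/diff/move pattern: the candidate unit is written to `docker.service.new`, and only if it differs from the current unit is it moved into place and the daemon reloaded and restarted. A minimal sketch of the same pattern on throwaway files (temp paths stand in for the systemd unit paths; no systemd involved):

```shell
#!/bin/sh
# Illustrative write-new/diff/move pattern from the log, on temp files.
set -eu
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd --tlsverify\n' > "$dir/docker.service.new"

# diff exits non-zero when the files differ, so the else branch runs only
# when the candidate actually changes the unit.
if diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null; then
  echo "unchanged"
else
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "updated"  # in the log, daemon-reload / enable / restart happen here
fi
```

Because an unchanged unit short-circuits at the `diff`, re-provisioning an already-configured machine skips the docker restart entirely.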
	I0910 18:02:53.263978    5904 start.go:293] postStartSetup for "functional-879800" (driver="hyperv")
	I0910 18:02:53.263978    5904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:02:53.277016    5904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:02:53.277016    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:55.142379    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:55.142379    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:55.142379    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:02:57.392000    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:02:57.392000    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:57.392620    5904 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:02:57.502794    5904 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2254925s)
	I0910 18:02:57.515359    5904 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:02:57.522486    5904 command_runner.go:130] > NAME=Buildroot
	I0910 18:02:57.522594    5904 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 18:02:57.522594    5904 command_runner.go:130] > ID=buildroot
	I0910 18:02:57.522594    5904 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 18:02:57.522594    5904 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 18:02:57.522594    5904 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:02:57.522713    5904 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 18:02:57.522913    5904 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 18:02:57.523522    5904 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 18:02:57.523522    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 18:02:57.524138    5904 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4724\hosts -> hosts in /etc/test/nested/copy/4724
	I0910 18:02:57.524196    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4724\hosts -> /etc/test/nested/copy/4724/hosts
	I0910 18:02:57.532206    5904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4724
	I0910 18:02:57.548999    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 18:02:57.592465    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4724\hosts --> /etc/test/nested/copy/4724/hosts (40 bytes)
	I0910 18:02:57.638476    5904 start.go:296] duration metric: took 4.3742014s for postStartSetup
	I0910 18:02:57.638547    5904 fix.go:56] duration metric: took 46.1391584s for fixHost
	I0910 18:02:57.638613    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:02:59.509447    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:02:59.509531    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:02:59.509604    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:03:01.780593    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:03:01.780593    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:03:01.784844    5904 main.go:141] libmachine: Using SSH client type: native
	I0910 18:03:01.785191    5904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:03:01.785191    5904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:03:01.919484    5904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725991382.144455076
	
	I0910 18:03:01.919558    5904 fix.go:216] guest clock: 1725991382.144455076
	I0910 18:03:01.919630    5904 fix.go:229] Guest: 2024-09-10 18:03:02.144455076 +0000 UTC Remote: 2024-09-10 18:02:57.6385476 +0000 UTC m=+51.202847101 (delta=4.505907476s)
	I0910 18:03:01.919782    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:03:03.814254    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:03:03.814254    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:03:03.815256    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:03:06.169450    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:03:06.169510    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:03:06.175615    5904 main.go:141] libmachine: Using SSH client type: native
	I0910 18:03:06.175615    5904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:03:06.176155    5904 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725991381
	I0910 18:03:06.330364    5904 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 18:03:01 UTC 2024
	
	I0910 18:03:06.330493    5904 fix.go:236] clock set: Tue Sep 10 18:03:01 UTC 2024
	 (err=<nil>)
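The clock fix above reads the guest clock with `date +%s.%N`, computes the drift against the host (delta=4.505907476s in this run), and resets the guest with `sudo date -s @<epoch>`. A rough sketch of the delta computation using epoch values taken from the log (the 2-second tolerance is an assumption for illustration; minikube's actual threshold may differ):

```shell
#!/bin/sh
# Epoch seconds approximated from the log: guest clock vs. host ("Remote") view.
guest=1725991382   # guest:  2024-09-10 18:03:02 UTC
host=1725991377    # host:   2024-09-10 18:02:57 UTC (approx.)

delta=$((guest - host))
echo "delta=${delta}s"

# Hypothetical tolerance of 2s; past it, the log runs `sudo date -s @<epoch>`.
if [ "$delta" -gt 2 ] || [ "$delta" -lt -2 ]; then
  echo "would run: sudo date -s @${host}"
fi
```

Note the epoch actually set in the log (`@1725991381`) is a few seconds later than the measured host time, since the host clock keeps advancing between the measurement and the `date -s` call.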
	I0910 18:03:06.330493    5904 start.go:83] releasing machines lock for "functional-879800", held for 54.8306534s
	I0910 18:03:06.330811    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:03:08.243889    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:03:08.243889    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:03:08.244747    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:03:10.513691    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:03:10.513890    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:03:10.516646    5904 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 18:03:10.516646    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:03:10.523188    5904 ssh_runner.go:195] Run: cat /version.json
	I0910 18:03:10.523188    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:03:12.482301    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:03:12.482301    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:03:12.483070    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:03:12.484389    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:03:12.484389    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:03:12.484389    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:03:14.840926    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:03:14.840926    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:03:14.841922    5904 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:03:14.866117    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:03:14.866632    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:03:14.866708    5904 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:03:14.933442    5904 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0910 18:03:14.933508    5904 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4165616s)
	W0910 18:03:14.933508    5904 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 18:03:14.966266    5904 command_runner.go:130] > {"iso_version": "v1.34.0-1725912912-19598", "kicbase_version": "v0.0.45", "minikube_version": "v1.34.0", "commit": "a47e98bacf93197560d0f08408949de0434951d5"}
	I0910 18:03:14.966266    5904 ssh_runner.go:235] Completed: cat /version.json: (4.4427771s)
	I0910 18:03:14.974884    5904 ssh_runner.go:195] Run: systemctl --version
	I0910 18:03:14.983099    5904 command_runner.go:130] > systemd 252 (252)
	I0910 18:03:14.983208    5904 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0910 18:03:14.990836    5904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 18:03:14.999702    5904 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0910 18:03:15.000267    5904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:03:15.008801    5904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:03:15.026617    5904 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 18:03:15.026617    5904 start.go:495] detecting cgroup driver to use...
	I0910 18:03:15.026818    5904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0910 18:03:15.031024    5904 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 18:03:15.031929    5904 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 18:03:15.063490    5904 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0910 18:03:15.075479    5904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 18:03:15.107012    5904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 18:03:15.127609    5904 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 18:03:15.139303    5904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:03:15.172904    5904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:03:15.209503    5904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:03:15.241995    5904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:03:15.269169    5904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:03:15.299166    5904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:03:15.329697    5904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:03:15.354282    5904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 18:03:15.382847    5904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:03:15.399858    5904 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 18:03:15.407846    5904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:03:15.432139    5904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:03:15.659131    5904 ssh_runner.go:195] Run: sudo systemctl restart containerd
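The series of `sed -i -r` edits above rewrites `/etc/containerd/config.toml` in place to pin the cgroupfs driver (`SystemdCgroup = false`), the runc v2 runtime, and the CNI conf dir, before reloading and restarting containerd. The SystemdCgroup substitution can be sketched on a throwaway file (the TOML fragment is a plausible excerpt, not the full config):

```shell
#!/bin/sh
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution as the log, minus sudo: flip any SystemdCgroup line to
# false while preserving its indentation (captured by \1).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

Matching on `^( *)` and re-emitting the capture is what lets the same one-liner work regardless of how deeply the key is nested in the TOML table.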
	I0910 18:03:15.692080    5904 start.go:495] detecting cgroup driver to use...
	I0910 18:03:15.705023    5904 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 18:03:15.727447    5904 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0910 18:03:15.727447    5904 command_runner.go:130] > [Unit]
	I0910 18:03:15.727563    5904 command_runner.go:130] > Description=Docker Application Container Engine
	I0910 18:03:15.727563    5904 command_runner.go:130] > Documentation=https://docs.docker.com
	I0910 18:03:15.727563    5904 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0910 18:03:15.727563    5904 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0910 18:03:15.727563    5904 command_runner.go:130] > StartLimitBurst=3
	I0910 18:03:15.727563    5904 command_runner.go:130] > StartLimitIntervalSec=60
	I0910 18:03:15.727563    5904 command_runner.go:130] > [Service]
	I0910 18:03:15.727563    5904 command_runner.go:130] > Type=notify
	I0910 18:03:15.727563    5904 command_runner.go:130] > Restart=on-failure
	I0910 18:03:15.727676    5904 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0910 18:03:15.727676    5904 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0910 18:03:15.727676    5904 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0910 18:03:15.727751    5904 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0910 18:03:15.727814    5904 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0910 18:03:15.727814    5904 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0910 18:03:15.727814    5904 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0910 18:03:15.727814    5904 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0910 18:03:15.727921    5904 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0910 18:03:15.727921    5904 command_runner.go:130] > ExecStart=
	I0910 18:03:15.727921    5904 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0910 18:03:15.727921    5904 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0910 18:03:15.727921    5904 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0910 18:03:15.728040    5904 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0910 18:03:15.728040    5904 command_runner.go:130] > LimitNOFILE=infinity
	I0910 18:03:15.728040    5904 command_runner.go:130] > LimitNPROC=infinity
	I0910 18:03:15.728040    5904 command_runner.go:130] > LimitCORE=infinity
	I0910 18:03:15.728040    5904 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0910 18:03:15.728040    5904 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0910 18:03:15.728040    5904 command_runner.go:130] > TasksMax=infinity
	I0910 18:03:15.728040    5904 command_runner.go:130] > TimeoutStartSec=0
	I0910 18:03:15.728150    5904 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0910 18:03:15.728150    5904 command_runner.go:130] > Delegate=yes
	I0910 18:03:15.728150    5904 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0910 18:03:15.728150    5904 command_runner.go:130] > KillMode=process
	I0910 18:03:15.728150    5904 command_runner.go:130] > [Install]
	I0910 18:03:15.728150    5904 command_runner.go:130] > WantedBy=multi-user.target
	I0910 18:03:15.737273    5904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:03:15.768191    5904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:03:15.811704    5904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:03:15.843049    5904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:03:15.864946    5904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:03:15.893769    5904 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
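The step above points crictl at the cri-dockerd socket by writing /etc/crictl.yaml via `printf | sudo tee`. A minimal reproduction of the same write, redirected to a scratch path so no root is needed (the /tmp location is ours; the endpoint value is taken verbatim from the log):

```shell
# Recreate the crictl config write from the log, but under /tmp instead of /etc.
mkdir -p /tmp/crictl-demo
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
  | tee /tmp/crictl-demo/crictl.yaml
```

The `tee` echoes the content back, which is why the log shows the runtime-endpoint line immediately after the command.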
	I0910 18:03:15.903602    5904 ssh_runner.go:195] Run: which cri-dockerd
	I0910 18:03:15.908500    5904 command_runner.go:130] > /usr/bin/cri-dockerd
	I0910 18:03:15.920181    5904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 18:03:15.935471    5904 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 18:03:15.971375    5904 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 18:03:16.197116    5904 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 18:03:16.413404    5904 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 18:03:16.413760    5904 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 18:03:16.455435    5904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:03:16.682525    5904 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:03:29.536293    5904 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.8528972s)
	I0910 18:03:29.548818    5904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 18:03:29.582289    5904 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0910 18:03:29.623075    5904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:03:29.655635    5904 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 18:03:29.864054    5904 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 18:03:30.046467    5904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:03:30.232787    5904 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 18:03:30.269923    5904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:03:30.300331    5904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:03:30.493673    5904 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 18:03:30.602332    5904 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 18:03:30.612518    5904 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 18:03:30.619548    5904 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0910 18:03:30.619548    5904 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 18:03:30.619548    5904 command_runner.go:130] > Device: 0,22	Inode: 1503        Links: 1
	I0910 18:03:30.619548    5904 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0910 18:03:30.619548    5904 command_runner.go:130] > Access: 2024-09-10 18:03:30.756747046 +0000
	I0910 18:03:30.619548    5904 command_runner.go:130] > Modify: 2024-09-10 18:03:30.739745665 +0000
	I0910 18:03:30.619548    5904 command_runner.go:130] > Change: 2024-09-10 18:03:30.741745828 +0000
	I0910 18:03:30.619548    5904 command_runner.go:130] >  Birth: -
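The "Will wait 60s for socket path" step is a bounded poll: restart cri-docker, then stat the socket until it appears or the deadline passes. A sketch of that polling pattern, demonstrated on a throwaway file rather than the real /var/run/cri-dockerd.sock (all paths and the simulated background "service" are ours):

```shell
# Bounded wait-for-path loop, like minikube's 60s wait for the CRI socket.
target=/tmp/demo-ready.flag
rm -f "$target"
( sleep 1; touch "$target" ) &   # stand-in for the service creating its socket
for i in $(seq 1 60); do
  if [ -e "$target" ]; then
    break                        # path exists; stop polling
  fi
  sleep 1
done
test -e "$target"                # fail the step if the deadline expired
```

In the real flow the final check is `stat /var/run/cri-dockerd.sock`, whose output (socket type, uid/gid, timestamps) is what appears in the log above.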
	I0910 18:03:30.619548    5904 start.go:563] Will wait 60s for crictl version
	I0910 18:03:30.632519    5904 ssh_runner.go:195] Run: which crictl
	I0910 18:03:30.638550    5904 command_runner.go:130] > /usr/bin/crictl
	I0910 18:03:30.646520    5904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:03:30.732206    5904 command_runner.go:130] > Version:  0.1.0
	I0910 18:03:30.732279    5904 command_runner.go:130] > RuntimeName:  docker
	I0910 18:03:30.732279    5904 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0910 18:03:30.732279    5904 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 18:03:30.732374    5904 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 18:03:30.742960    5904 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:03:30.807718    5904 command_runner.go:130] > 27.2.0
	I0910 18:03:30.816697    5904 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:03:30.869088    5904 command_runner.go:130] > 27.2.0
	I0910 18:03:30.873572    5904 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 18:03:30.873572    5904 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 18:03:30.877533    5904 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 18:03:30.877533    5904 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 18:03:30.877533    5904 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 18:03:30.877533    5904 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 18:03:30.880095    5904 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 18:03:30.880095    5904 ip.go:214] interface addr: 172.31.208.1/20
	I0910 18:03:30.888104    5904 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 18:03:30.895123    5904 command_runner.go:130] > 172.31.208.1	host.minikube.internal
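The grep above verifies that /etc/hosts already maps host.minikube.internal to the gateway IP found on the Hyper-V switch, so the entry is not re-added. The same anchored, tab-separated check against a scratch hosts file (IP and hostname come from the log; the file location is ours):

```shell
# Reproduce minikube's idempotent hosts-entry check on a temp file.
hosts=/tmp/demo-hosts
printf '%s\t%s\n' 172.31.208.1 host.minikube.internal > "$hosts"
# tab between IP and name, hostname anchored at end of line, as in the log
grep "172.31.208.1	host.minikube.internal$" "$hosts"
```

A zero exit status means the entry exists and the write is skipped.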
	I0910 18:03:30.896203    5904 kubeadm.go:883] updating cluster {Name:functional-879800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.31.0 ClusterName:functional-879800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.92 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:03:30.896203    5904 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:03:30.902593    5904 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 18:03:30.940989    5904 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:03:30.941053    5904 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:03:30.941114    5904 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:03:30.941114    5904 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:03:30.941114    5904 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0910 18:03:30.941114    5904 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0910 18:03:30.941114    5904 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:03:30.941114    5904 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:03:30.941209    5904 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 18:03:30.941249    5904 docker.go:615] Images already preloaded, skipping extraction
	I0910 18:03:30.948674    5904 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 18:03:30.977519    5904 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:03:30.977604    5904 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:03:30.977604    5904 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:03:30.977604    5904 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:03:30.977670    5904 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0910 18:03:30.977670    5904 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0910 18:03:30.977670    5904 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:03:30.977670    5904 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:03:30.977740    5904 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 18:03:30.977740    5904 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:03:30.977858    5904 kubeadm.go:934] updating node { 172.31.208.92 8441 v1.31.0 docker true true} ...
	I0910 18:03:30.978131    5904 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-879800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.208.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-879800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:03:30.989981    5904 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 18:03:31.097580    5904 command_runner.go:130] > cgroupfs
	I0910 18:03:31.097580    5904 cni.go:84] Creating CNI manager for ""
	I0910 18:03:31.097580    5904 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 18:03:31.097580    5904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:03:31.097580    5904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.208.92 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-879800 NodeName:functional-879800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.208.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.208.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:03:31.098147    5904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.208.92
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-879800"
	  kubeletExtraArgs:
	    node-ip: 172.31.208.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.208.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:03:31.109394    5904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:03:31.150053    5904 command_runner.go:130] > kubeadm
	I0910 18:03:31.150103    5904 command_runner.go:130] > kubectl
	I0910 18:03:31.150103    5904 command_runner.go:130] > kubelet
	I0910 18:03:31.150184    5904 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:03:31.159685    5904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:03:31.180292    5904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 18:03:31.229627    5904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:03:31.261141    5904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
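The kubeadm config dumped above is a single file of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) shipped to /var/tmp/minikube/kubeadm.yaml.new. A skeleton-only sanity check of that multi-document shape, keeping just the apiVersion/kind pairs from the log and counting the documents (the /tmp copy is ours):

```shell
# Skeleton of the 4-document kubeadm config from the log; count its kinds.
cat > /tmp/demo-kubeadm.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' /tmp/demo-kubeadm.yaml
```

kubeadm parses the file document-by-document, so all four kinds must survive the transfer intact.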
	I0910 18:03:31.308855    5904 ssh_runner.go:195] Run: grep 172.31.208.92	control-plane.minikube.internal$ /etc/hosts
	I0910 18:03:31.314486    5904 command_runner.go:130] > 172.31.208.92	control-plane.minikube.internal
	I0910 18:03:31.323253    5904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:03:31.611318    5904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:03:31.635717    5904 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800 for IP: 172.31.208.92
	I0910 18:03:31.635809    5904 certs.go:194] generating shared ca certs ...
	I0910 18:03:31.635848    5904 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:03:31.636835    5904 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 18:03:31.637158    5904 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 18:03:31.637158    5904 certs.go:256] generating profile certs ...
	I0910 18:03:31.638369    5904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\client.key
	I0910 18:03:31.638945    5904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\apiserver.key.aa745d38
	I0910 18:03:31.639383    5904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\proxy-client.key
	I0910 18:03:31.639476    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:03:31.639573    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:03:31.639673    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:03:31.639766    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:03:31.639854    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:03:31.639954    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:03:31.640053    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:03:31.640163    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:03:31.640503    5904 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 18:03:31.640771    5904 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 18:03:31.640814    5904 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 18:03:31.641085    5904 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 18:03:31.641281    5904 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 18:03:31.641470    5904 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 18:03:31.641819    5904 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 18:03:31.641894    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:03:31.641894    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 18:03:31.641894    5904 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 18:03:31.643173    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:03:31.715053    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 18:03:31.788300    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:03:31.850399    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 18:03:31.927798    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:03:32.022342    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:03:32.098816    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:03:32.269011    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:03:32.413824    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:03:32.556824    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 18:03:32.636377    5904 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 18:03:32.695437    5904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:03:32.753696    5904 ssh_runner.go:195] Run: openssl version
	I0910 18:03:32.762634    5904 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 18:03:32.771331    5904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 18:03:32.801344    5904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 18:03:32.808356    5904 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 18:03:32.808754    5904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 18:03:32.817350    5904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 18:03:32.825373    5904 command_runner.go:130] > 3ec20f2e
	I0910 18:03:32.834129    5904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:03:32.858099    5904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:03:32.892096    5904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:03:32.903578    5904 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:03:32.903578    5904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:03:32.913073    5904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:03:32.928069    5904 command_runner.go:130] > b5213941
	I0910 18:03:32.941971    5904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:03:32.968604    5904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 18:03:32.994965    5904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 18:03:33.000954    5904 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 18:03:33.002184    5904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 18:03:33.010800    5904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 18:03:33.018328    5904 command_runner.go:130] > 51391683
	I0910 18:03:33.027544    5904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
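The repeated `openssl x509 -hash` / `ln -fs` pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0), which is how OpenSSL locates trust anchors. The same idiom end to end with a throwaway self-signed cert (the CN and all paths here are ours, not minikube's):

```shell
# Subject-hash symlink step from the log, using a demo cert under /tmp.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demo-ca' \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
# same "test -L || ln -fs" idiom the log uses for /etc/ssl/certs/<hash>.0
test -L "/tmp/${hash}.0" || ln -fs /tmp/demo-ca.pem "/tmp/${hash}.0"
```

The `.0` suffix is a collision counter; a second distinct cert with the same subject hash would be linked as `<hash>.1`.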
	I0910 18:03:33.074114    5904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:03:33.081083    5904 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:03:33.081083    5904 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0910 18:03:33.081083    5904 command_runner.go:130] > Device: 8,1	Inode: 8383809     Links: 1
	I0910 18:03:33.081083    5904 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 18:03:33.081083    5904 command_runner.go:130] > Access: 2024-09-10 18:01:06.987077609 +0000
	I0910 18:03:33.081083    5904 command_runner.go:130] > Modify: 2024-09-10 18:01:06.987077609 +0000
	I0910 18:03:33.081083    5904 command_runner.go:130] > Change: 2024-09-10 18:01:06.987077609 +0000
	I0910 18:03:33.081083    5904 command_runner.go:130] >  Birth: 2024-09-10 18:01:06.987077609 +0000
	I0910 18:03:33.091568    5904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:03:33.101485    5904 command_runner.go:130] > Certificate will not expire
	I0910 18:03:33.110025    5904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:03:33.130538    5904 command_runner.go:130] > Certificate will not expire
	I0910 18:03:33.143288    5904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:03:33.155749    5904 command_runner.go:130] > Certificate will not expire
	I0910 18:03:33.164459    5904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:03:33.175586    5904 command_runner.go:130] > Certificate will not expire
	I0910 18:03:33.185601    5904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:03:33.194643    5904 command_runner.go:130] > Certificate will not expire
	I0910 18:03:33.206992    5904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:03:33.215690    5904 command_runner.go:130] > Certificate will not expire
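Each "Certificate will not expire" line above comes from `openssl x509 -checkend 86400`, which exits 0 (and prints that message) when the cert is still valid 86400 seconds, i.e. 24 hours, from now; minikube uses this to decide whether cluster certs need regeneration before restart. The same probe against a freshly generated demo cert (cert and paths are ours):

```shell
# The -checkend expiry probe from the log, run on a 7-day demo cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 7 -subj '/CN=checkend-demo' \
  -keyout /tmp/ce.key -out /tmp/ce.pem 2>/dev/null
# exits 0 if still valid 24h from now; nonzero would trigger regeneration
openssl x509 -noout -in /tmp/ce.pem -checkend 86400
```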
	I0910 18:03:33.216183    5904 kubeadm.go:392] StartCluster: {Name:functional-879800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.0 ClusterName:functional-879800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.92 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:03:33.223544    5904 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 18:03:33.280378    5904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:03:33.308029    5904 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0910 18:03:33.308208    5904 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0910 18:03:33.308208    5904 command_runner.go:130] > /var/lib/minikube/etcd:
	I0910 18:03:33.308208    5904 command_runner.go:130] > member
	I0910 18:03:33.308345    5904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:03:33.308345    5904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:03:33.318809    5904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:03:33.341668    5904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:03:33.342677    5904 kubeconfig.go:125] found "functional-879800" server: "https://172.31.208.92:8441"
	I0910 18:03:33.345021    5904 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:03:33.345684    5904 kapi.go:59] client config for functional-879800: &rest.Config{Host:"https://172.31.208.92:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 18:03:33.346934    5904 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 18:03:33.356817    5904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:03:33.378720    5904 kubeadm.go:630] The running cluster does not require reconfiguration: 172.31.208.92
	I0910 18:03:33.378720    5904 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:03:33.387240    5904 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 18:03:33.438480    5904 command_runner.go:130] > 37ad76e1635d
	I0910 18:03:33.438534    5904 command_runner.go:130] > 5156bce01a2c
	I0910 18:03:33.438534    5904 command_runner.go:130] > d9b410da7bc7
	I0910 18:03:33.438534    5904 command_runner.go:130] > 07900b60b5ab
	I0910 18:03:33.438586    5904 command_runner.go:130] > ab25d8c1a3f8
	I0910 18:03:33.438586    5904 command_runner.go:130] > 6e134f080fb7
	I0910 18:03:33.438586    5904 command_runner.go:130] > ec5fefa9ec4e
	I0910 18:03:33.438586    5904 command_runner.go:130] > 976499ca237d
	I0910 18:03:33.438662    5904 command_runner.go:130] > e3923829bb15
	I0910 18:03:33.438662    5904 command_runner.go:130] > a3e714586dcf
	I0910 18:03:33.438662    5904 command_runner.go:130] > ccc45d358274
	I0910 18:03:33.438662    5904 command_runner.go:130] > 6e715701f9a8
	I0910 18:03:33.438715    5904 command_runner.go:130] > c3223d917882
	I0910 18:03:33.438715    5904 command_runner.go:130] > 8b5b13f2cec8
	I0910 18:03:33.438715    5904 command_runner.go:130] > 2fded56066ba
	I0910 18:03:33.438715    5904 command_runner.go:130] > f4c45e58e5ad
	I0910 18:03:33.438715    5904 command_runner.go:130] > 0fe7875f1ea8
	I0910 18:03:33.438715    5904 command_runner.go:130] > b0e69451632d
	I0910 18:03:33.438715    5904 command_runner.go:130] > c0055974541d
	I0910 18:03:33.438715    5904 command_runner.go:130] > 443d0b1bfd8f
	I0910 18:03:33.438715    5904 command_runner.go:130] > 0e83830f0f0e
	I0910 18:03:33.438715    5904 command_runner.go:130] > 935ff5cd9429
	I0910 18:03:33.438715    5904 command_runner.go:130] > c72de8d88c7a
	I0910 18:03:33.438715    5904 command_runner.go:130] > afe36a9485ea
	I0910 18:03:33.438715    5904 command_runner.go:130] > 7acd3a1687c6
	I0910 18:03:33.438715    5904 command_runner.go:130] > d1a139975a68
	I0910 18:03:33.438715    5904 command_runner.go:130] > 19fb530e5214
	I0910 18:03:33.438715    5904 command_runner.go:130] > 9040449f0077
	I0910 18:03:33.438715    5904 docker.go:483] Stopping containers: [37ad76e1635d 5156bce01a2c d9b410da7bc7 07900b60b5ab ab25d8c1a3f8 6e134f080fb7 ec5fefa9ec4e 976499ca237d e3923829bb15 a3e714586dcf ccc45d358274 6e715701f9a8 c3223d917882 8b5b13f2cec8 2fded56066ba f4c45e58e5ad 0fe7875f1ea8 b0e69451632d c0055974541d 443d0b1bfd8f 0e83830f0f0e 935ff5cd9429 c72de8d88c7a afe36a9485ea 7acd3a1687c6 d1a139975a68 19fb530e5214 9040449f0077]
	I0910 18:03:33.448182    5904 ssh_runner.go:195] Run: docker stop 37ad76e1635d 5156bce01a2c d9b410da7bc7 07900b60b5ab ab25d8c1a3f8 6e134f080fb7 ec5fefa9ec4e 976499ca237d e3923829bb15 a3e714586dcf ccc45d358274 6e715701f9a8 c3223d917882 8b5b13f2cec8 2fded56066ba f4c45e58e5ad 0fe7875f1ea8 b0e69451632d c0055974541d 443d0b1bfd8f 0e83830f0f0e 935ff5cd9429 c72de8d88c7a afe36a9485ea 7acd3a1687c6 d1a139975a68 19fb530e5214 9040449f0077
	I0910 18:03:43.171993    5904 command_runner.go:130] > 37ad76e1635d
	I0910 18:03:43.171993    5904 command_runner.go:130] > 5156bce01a2c
	I0910 18:03:43.171993    5904 command_runner.go:130] > d9b410da7bc7
	I0910 18:03:43.171993    5904 command_runner.go:130] > 07900b60b5ab
	I0910 18:03:43.171993    5904 command_runner.go:130] > ab25d8c1a3f8
	I0910 18:03:43.171993    5904 command_runner.go:130] > 6e134f080fb7
	I0910 18:03:43.171993    5904 command_runner.go:130] > ec5fefa9ec4e
	I0910 18:03:43.171993    5904 command_runner.go:130] > 976499ca237d
	I0910 18:03:43.171993    5904 command_runner.go:130] > e3923829bb15
	I0910 18:03:43.171993    5904 command_runner.go:130] > a3e714586dcf
	I0910 18:03:43.171993    5904 command_runner.go:130] > ccc45d358274
	I0910 18:03:43.171993    5904 command_runner.go:130] > 6e715701f9a8
	I0910 18:03:43.171993    5904 command_runner.go:130] > c3223d917882
	I0910 18:03:43.171993    5904 command_runner.go:130] > 8b5b13f2cec8
	I0910 18:03:43.171993    5904 command_runner.go:130] > 2fded56066ba
	I0910 18:03:43.171993    5904 command_runner.go:130] > f4c45e58e5ad
	I0910 18:03:43.171993    5904 command_runner.go:130] > 0fe7875f1ea8
	I0910 18:03:43.171993    5904 command_runner.go:130] > b0e69451632d
	I0910 18:03:43.171993    5904 command_runner.go:130] > c0055974541d
	I0910 18:03:43.171993    5904 command_runner.go:130] > 443d0b1bfd8f
	I0910 18:03:43.171993    5904 command_runner.go:130] > 0e83830f0f0e
	I0910 18:03:43.171993    5904 command_runner.go:130] > 935ff5cd9429
	I0910 18:03:43.172523    5904 command_runner.go:130] > c72de8d88c7a
	I0910 18:03:43.172523    5904 command_runner.go:130] > afe36a9485ea
	I0910 18:03:43.172523    5904 command_runner.go:130] > 7acd3a1687c6
	I0910 18:03:43.172523    5904 command_runner.go:130] > d1a139975a68
	I0910 18:03:43.172523    5904 command_runner.go:130] > 19fb530e5214
	I0910 18:03:43.172523    5904 command_runner.go:130] > 9040449f0077
	I0910 18:03:43.172701    5904 ssh_runner.go:235] Completed: docker stop 37ad76e1635d 5156bce01a2c d9b410da7bc7 07900b60b5ab ab25d8c1a3f8 6e134f080fb7 ec5fefa9ec4e 976499ca237d e3923829bb15 a3e714586dcf ccc45d358274 6e715701f9a8 c3223d917882 8b5b13f2cec8 2fded56066ba f4c45e58e5ad 0fe7875f1ea8 b0e69451632d c0055974541d 443d0b1bfd8f 0e83830f0f0e 935ff5cd9429 c72de8d88c7a afe36a9485ea 7acd3a1687c6 d1a139975a68 19fb530e5214 9040449f0077: (9.7237716s)
	I0910 18:03:43.182341    5904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:03:43.243581    5904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:03:43.261839    5904 command_runner.go:130] > -rw------- 1 root root 5651 Sep 10 18:01 /etc/kubernetes/admin.conf
	I0910 18:03:43.261839    5904 command_runner.go:130] > -rw------- 1 root root 5657 Sep 10 18:01 /etc/kubernetes/controller-manager.conf
	I0910 18:03:43.261927    5904 command_runner.go:130] > -rw------- 1 root root 2007 Sep 10 18:01 /etc/kubernetes/kubelet.conf
	I0910 18:03:43.261927    5904 command_runner.go:130] > -rw------- 1 root root 5601 Sep 10 18:01 /etc/kubernetes/scheduler.conf
	I0910 18:03:43.261927    5904 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 10 18:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Sep 10 18:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 10 18:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 10 18:01 /etc/kubernetes/scheduler.conf
	
	I0910 18:03:43.274929    5904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0910 18:03:43.290207    5904 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0910 18:03:43.304525    5904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0910 18:03:43.321690    5904 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0910 18:03:43.330493    5904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0910 18:03:43.348938    5904 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:03:43.360617    5904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:03:43.385076    5904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0910 18:03:43.402099    5904 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:03:43.414282    5904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:03:43.441331    5904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:03:43.461854    5904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:03:43.518134    5904 command_runner.go:130] ! W0910 18:03:43.744847    6050 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:43.519974    5904 command_runner.go:130] ! W0910 18:03:43.745895    6050 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:43.528915    5904 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 18:03:43.529003    5904 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0910 18:03:43.529003    5904 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0910 18:03:43.529003    5904 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 18:03:43.529003    5904 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0910 18:03:43.529003    5904 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0910 18:03:43.529003    5904 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0910 18:03:43.529003    5904 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0910 18:03:43.529003    5904 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0910 18:03:43.529120    5904 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 18:03:43.529120    5904 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 18:03:43.529120    5904 command_runner.go:130] > [certs] Using the existing "sa" key
	I0910 18:03:43.529120    5904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:03:43.586922    5904 command_runner.go:130] ! W0910 18:03:43.811716    6055 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:43.586922    5904 command_runner.go:130] ! W0910 18:03:43.812423    6055 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:44.918278    5904 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 18:03:44.918370    5904 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0910 18:03:44.918370    5904 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0910 18:03:44.918370    5904 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0910 18:03:44.918370    5904 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 18:03:44.918438    5904 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 18:03:44.918475    5904 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3892246s)
	I0910 18:03:44.918475    5904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:03:44.982002    5904 command_runner.go:130] ! W0910 18:03:45.207504    6060 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:44.982826    5904 command_runner.go:130] ! W0910 18:03:45.208728    6060 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:45.236999    5904 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 18:03:45.236999    5904 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 18:03:45.236999    5904 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0910 18:03:45.236999    5904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:03:45.300219    5904 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 18:03:45.300219    5904 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 18:03:45.309214    5904 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 18:03:45.313292    5904 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 18:03:45.318701    5904 command_runner.go:130] ! W0910 18:03:45.520308    6087 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:45.318701    5904 command_runner.go:130] ! W0910 18:03:45.521554    6087 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:45.318701    5904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:03:45.390717    5904 command_runner.go:130] ! W0910 18:03:45.616071    6095 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:45.392129    5904 command_runner.go:130] ! W0910 18:03:45.617051    6095 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:03:45.401099    5904 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 18:03:45.401295    5904 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:03:45.414136    5904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:03:45.448815    5904 command_runner.go:130] > 5959
	I0910 18:03:45.449418    5904 api_server.go:72] duration metric: took 47.587ms to wait for apiserver process to appear ...
	I0910 18:03:45.449418    5904 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:03:45.449479    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:03:50.460007    5904 api_server.go:269] stopped: https://172.31.208.92:8441/healthz: Get "https://172.31.208.92:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 18:03:50.460179    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:03:55.462329    5904 api_server.go:269] stopped: https://172.31.208.92:8441/healthz: Get "https://172.31.208.92:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 18:03:55.462329    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:03:58.845034    5904 api_server.go:269] stopped: https://172.31.208.92:8441/healthz: Get "https://172.31.208.92:8441/healthz": read tcp 172.31.208.1:63585->172.31.208.92:8441: wsarecv: An existing connection was forcibly closed by the remote host.
	I0910 18:03:58.845511    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:04:00.872782    5904 api_server.go:269] stopped: https://172.31.208.92:8441/healthz: Get "https://172.31.208.92:8441/healthz": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
	I0910 18:04:00.872782    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:04:03.236470    5904 api_server.go:279] https://172.31.208.92:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:04:03.236719    5904 api_server.go:103] status: https://172.31.208.92:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:04:03.236771    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:04:03.286253    5904 api_server.go:279] https://172.31.208.92:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:04:03.286253    5904 api_server.go:103] status: https://172.31.208.92:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:04:03.455563    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:04:03.464598    5904 api_server.go:279] https://172.31.208.92:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:04:03.464706    5904 api_server.go:103] status: https://172.31.208.92:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:04:03.959246    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:04:03.971462    5904 api_server.go:279] https://172.31.208.92:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:04:03.971462    5904 api_server.go:103] status: https://172.31.208.92:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:04:04.451125    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:04:04.459568    5904 api_server.go:279] https://172.31.208.92:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:04:04.459568    5904 api_server.go:103] status: https://172.31.208.92:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:04:04.959186    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:04:04.967216    5904 api_server.go:279] https://172.31.208.92:8441/healthz returned 200:
	ok
	I0910 18:04:04.967768    5904 round_trippers.go:463] GET https://172.31.208.92:8441/version
	I0910 18:04:04.967768    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:04.967768    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:04.967768    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:04.978013    5904 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 18:04:04.978013    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:04.978013    5904 round_trippers.go:580]     Audit-Id: c41c3ade-f82a-4217-a7a8-2a11a5e67ae9
	I0910 18:04:04.978013    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:04.978013    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:04.978013    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:04.978013    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:04.978013    5904 round_trippers.go:580]     Content-Length: 263
	I0910 18:04:04.978013    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:05 GMT
	I0910 18:04:04.978013    5904 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0910 18:04:04.978013    5904 api_server.go:141] control plane version: v1.31.0
	I0910 18:04:04.978013    5904 api_server.go:131] duration metric: took 19.5272709s to wait for apiserver health ...
	I0910 18:04:04.978013    5904 cni.go:84] Creating CNI manager for ""
	I0910 18:04:04.978013    5904 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 18:04:04.980008    5904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:04:04.991012    5904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:04:05.011044    5904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:04:05.047185    5904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:04:05.047406    5904 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 18:04:05.047484    5904 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 18:04:05.047666    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods
	I0910 18:04:05.047727    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:05.047727    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:05.047727    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:05.054649    5904 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:04:05.054776    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:05.054776    5904 round_trippers.go:580]     Audit-Id: fb67992b-66c8-4784-a31c-656cad6b601a
	I0910 18:04:05.054776    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:05.054776    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:05.054776    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:05.054776    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:05.054776    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:05 GMT
	I0910 18:04:05.055593    5904 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"552"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-266t8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"e2d0f1c5-7959-4f05-a592-c427855eb2da","resourceVersion":"502","creationTimestamp":"2024-09-10T18:01:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"e617221a-3d59-4054-85fe-856d80505706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e617221a-3d59-4054-85fe-856d80505706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52638 chars]
	I0910 18:04:05.060414    5904 system_pods.go:59] 7 kube-system pods found
	I0910 18:04:05.060414    5904 system_pods.go:61] "coredns-6f6b679f8f-266t8" [e2d0f1c5-7959-4f05-a592-c427855eb2da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:04:05.060414    5904 system_pods.go:61] "etcd-functional-879800" [ca01f784-07cb-4ff1-b2f1-86794a6f2633] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:04:05.060414    5904 system_pods.go:61] "kube-apiserver-functional-879800" [275b24d2-eae4-4b12-bc75-53c7ca992b97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:04:05.060414    5904 system_pods.go:61] "kube-controller-manager-functional-879800" [af44dd4d-9ff1-4e8f-8ae7-33b2daa153cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:04:05.060414    5904 system_pods.go:61] "kube-proxy-kpwfm" [2b755e57-fe35-4419-b5bc-697f67cc9cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 18:04:05.060414    5904 system_pods.go:61] "kube-scheduler-functional-879800" [19fc695f-2881-45ac-afc0-1da162cc95f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:04:05.060414    5904 system_pods.go:61] "storage-provisioner" [180e0128-a2a6-4313-9a60-36e9992df753] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 18:04:05.060414    5904 system_pods.go:74] duration metric: took 13.1721ms to wait for pod list to return data ...
	I0910 18:04:05.060414    5904 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:04:05.060968    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes
	I0910 18:04:05.060968    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:05.060968    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:05.060968    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:05.064711    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:05.064775    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:05.064775    5904 round_trippers.go:580]     Audit-Id: b94b713d-31b7-46c4-8901-1abd2f28bef9
	I0910 18:04:05.064775    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:05.064775    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:05.064775    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:05.064775    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:05.064775    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:05 GMT
	I0910 18:04:05.065017    5904 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"553"},"items":[{"metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4832 chars]
	I0910 18:04:05.065785    5904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:04:05.065843    5904 node_conditions.go:123] node cpu capacity is 2
	I0910 18:04:05.065899    5904 node_conditions.go:105] duration metric: took 5.4851ms to run NodePressure ...
	I0910 18:04:05.065899    5904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:04:05.126720    5904 command_runner.go:130] ! W0910 18:04:05.352509    6663 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:04:05.126720    5904 command_runner.go:130] ! W0910 18:04:05.353444    6663 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:04:05.465424    5904 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0910 18:04:05.465974    5904 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0910 18:04:05.466040    5904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:04:05.466247    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0910 18:04:05.466315    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:05.466315    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:05.466370    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:05.471490    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:05.471490    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:05.471897    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:05 GMT
	I0910 18:04:05.471897    5904 round_trippers.go:580]     Audit-Id: 7e9d5709-6c28-4ded-88d6-09b80c6cdd91
	I0910 18:04:05.471897    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:05.471897    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:05.471897    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:05.471897    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:05.479351    5904 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"503","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 31259 chars]
	I0910 18:04:05.481476    5904 kubeadm.go:739] kubelet initialised
	I0910 18:04:05.481476    5904 kubeadm.go:740] duration metric: took 15.4355ms waiting for restarted kubelet to initialise ...
	I0910 18:04:05.481550    5904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:04:05.481666    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods
	I0910 18:04:05.481718    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:05.481718    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:05.481718    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:05.485976    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:05.485976    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:05.485976    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:05.485976    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:05.485976    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:05.485976    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:05 GMT
	I0910 18:04:05.485976    5904 round_trippers.go:580]     Audit-Id: 3c63ea46-e9c7-4f5f-bd16-7f22c7c2cf94
	I0910 18:04:05.485976    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:05.491548    5904 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-266t8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"e2d0f1c5-7959-4f05-a592-c427855eb2da","resourceVersion":"556","creationTimestamp":"2024-09-10T18:01:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"e617221a-3d59-4054-85fe-856d80505706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e617221a-3d59-4054-85fe-856d80505706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52416 chars]
	I0910 18:04:05.493835    5904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-266t8" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:05.493992    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-266t8
	I0910 18:04:05.494034    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:05.494034    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:05.494034    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:05.498601    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:05.498601    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:05.498601    5904 round_trippers.go:580]     Audit-Id: 4199ee95-34a3-4ff8-af91-5b3056d963d2
	I0910 18:04:05.498601    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:05.498601    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:05.498601    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:05.498601    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:05.498601    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:05 GMT
	I0910 18:04:05.498601    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-266t8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"e2d0f1c5-7959-4f05-a592-c427855eb2da","resourceVersion":"556","creationTimestamp":"2024-09-10T18:01:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"e617221a-3d59-4054-85fe-856d80505706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e617221a-3d59-4054-85fe-856d80505706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6935 chars]
	I0910 18:04:05.499709    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:05.499709    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:05.499709    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:05.499780    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:05.501959    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:05.501959    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:05.501959    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:05.501959    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:05.501959    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:05.501959    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:05 GMT
	I0910 18:04:05.502364    5904 round_trippers.go:580]     Audit-Id: 47bf7d82-0563-4f60-8996-187e6b434b8e
	I0910 18:04:05.502364    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:05.502545    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:05.997873    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-266t8
	I0910 18:04:05.997873    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:05.997873    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:05.997873    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:06.001857    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:06.001857    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:06.001857    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:06.001857    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:06.001857    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:06.001857    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:06.001857    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:06 GMT
	I0910 18:04:06.001857    5904 round_trippers.go:580]     Audit-Id: bf375c92-22b3-40b5-a47d-a11010b48f1e
	I0910 18:04:06.002910    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-266t8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"e2d0f1c5-7959-4f05-a592-c427855eb2da","resourceVersion":"556","creationTimestamp":"2024-09-10T18:01:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"e617221a-3d59-4054-85fe-856d80505706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e617221a-3d59-4054-85fe-856d80505706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6935 chars]
	I0910 18:04:06.003892    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:06.003892    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:06.003892    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:06.003892    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:06.006841    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:06.006841    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:06.006841    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:06.006841    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:06.006841    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:06.006841    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:06 GMT
	I0910 18:04:06.006841    5904 round_trippers.go:580]     Audit-Id: b121f1c1-cb07-4127-979c-6880149e9406
	I0910 18:04:06.006841    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:06.007922    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:06.501084    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-266t8
	I0910 18:04:06.501318    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:06.501318    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:06.501318    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:06.505414    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:06.505414    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:06.505414    5904 round_trippers.go:580]     Audit-Id: 100d55d5-ac3e-43e3-9c0b-cd35b4feb8c5
	I0910 18:04:06.505414    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:06.505414    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:06.505414    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:06.505414    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:06.505414    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:06 GMT
	I0910 18:04:06.505414    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-266t8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"e2d0f1c5-7959-4f05-a592-c427855eb2da","resourceVersion":"560","creationTimestamp":"2024-09-10T18:01:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"e617221a-3d59-4054-85fe-856d80505706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e617221a-3d59-4054-85fe-856d80505706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6706 chars]
	I0910 18:04:06.506910    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:06.506994    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:06.506994    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:06.506994    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:06.509807    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:06.509807    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:06.509807    5904 round_trippers.go:580]     Audit-Id: 619db352-8f89-45a8-bb32-9054fd41c285
	I0910 18:04:06.509807    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:06.509807    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:06.509807    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:06.509807    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:06.509906    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:06 GMT
	I0910 18:04:06.510287    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:06.510957    5904 pod_ready.go:93] pod "coredns-6f6b679f8f-266t8" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:06.511020    5904 pod_ready.go:82] duration metric: took 1.0170613s for pod "coredns-6f6b679f8f-266t8" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:06.511077    5904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:06.511263    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:06.511263    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:06.511263    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:06.511263    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:06.513869    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:06.514513    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:06.514513    5904 round_trippers.go:580]     Audit-Id: 8e7da9b2-9099-4bd1-adea-4d0328b1b92f
	I0910 18:04:06.514513    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:06.514513    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:06.514513    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:06.514513    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:06.514513    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:06 GMT
	I0910 18:04:06.514736    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"503","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6899 chars]
	I0910 18:04:06.515233    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:06.515233    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:06.515305    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:06.515305    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:06.537905    5904 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0910 18:04:06.537905    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:06.537905    5904 round_trippers.go:580]     Audit-Id: 40eaa053-3eff-4864-b97a-ce2ae51c20d5
	I0910 18:04:06.537905    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:06.537905    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:06.537905    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:06.537905    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:06.537905    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:06 GMT
	I0910 18:04:06.537905    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:07.017227    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:07.017313    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:07.017373    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:07.017373    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:07.020644    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:07.020644    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:07.020644    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:07.020644    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:07.020644    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:07.020644    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:07.020644    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:07 GMT
	I0910 18:04:07.020644    5904 round_trippers.go:580]     Audit-Id: 272ee8fd-7b62-44aa-9c4c-c4b550ddb006
	I0910 18:04:07.020989    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"503","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6899 chars]
	I0910 18:04:07.021988    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:07.022056    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:07.022119    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:07.022119    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:07.026823    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:07.026933    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:07.026933    5904 round_trippers.go:580]     Audit-Id: ea854720-4276-4e52-9fe4-f69c333a8bc7
	I0910 18:04:07.026933    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:07.026988    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:07.026988    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:07.026988    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:07.026988    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:07 GMT
	I0910 18:04:07.027433    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:07.518125    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:07.518381    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:07.518381    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:07.518381    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:07.522196    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:07.522196    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:07.522273    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:07.522273    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:07 GMT
	I0910 18:04:07.522273    5904 round_trippers.go:580]     Audit-Id: 77a8781e-4589-4094-a434-ea3e17392669
	I0910 18:04:07.522273    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:07.522273    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:07.522273    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:07.522546    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"503","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6899 chars]
	I0910 18:04:07.523417    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:07.523484    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:07.523484    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:07.523551    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:07.530133    5904 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:04:07.530133    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:07.530133    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:07.530133    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:07.530133    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:07 GMT
	I0910 18:04:07.530133    5904 round_trippers.go:580]     Audit-Id: deb96535-a8de-4f6b-ac31-6ccb648345c6
	I0910 18:04:07.530133    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:07.530133    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:07.530133    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:08.017255    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:08.017255    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:08.017534    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:08.017534    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:08.021937    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:08.021937    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:08.021937    5904 round_trippers.go:580]     Audit-Id: f4dc6129-8673-41a1-8fac-c89ace1603d8
	I0910 18:04:08.021937    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:08.021937    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:08.021937    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:08.021937    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:08.021937    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:08 GMT
	I0910 18:04:08.022554    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"503","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6899 chars]
	I0910 18:04:08.023453    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:08.023521    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:08.023521    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:08.023521    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:08.029419    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:08.029419    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:08.029419    5904 round_trippers.go:580]     Audit-Id: 44276bea-5507-4754-a069-8b592be7b8cb
	I0910 18:04:08.029419    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:08.029419    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:08.029419    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:08.029419    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:08.029419    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:08 GMT
	I0910 18:04:08.029419    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:08.514816    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:08.514887    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:08.514887    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:08.514887    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:08.519951    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:08.519951    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:08.519951    5904 round_trippers.go:580]     Audit-Id: 53e9e325-dfd0-48d8-ac6c-52a1e6335704
	I0910 18:04:08.519951    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:08.520050    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:08.520050    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:08.520050    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:08.520050    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:08 GMT
	I0910 18:04:08.520398    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"503","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6899 chars]
	I0910 18:04:08.521583    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:08.521583    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:08.521583    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:08.521665    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:08.525390    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:08.525390    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:08.525390    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:08.525390    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:08.525390    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:08 GMT
	I0910 18:04:08.525390    5904 round_trippers.go:580]     Audit-Id: c3134823-8c61-474d-ac20-6c3762075be7
	I0910 18:04:08.525390    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:08.525390    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:08.526035    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:08.526350    5904 pod_ready.go:103] pod "etcd-functional-879800" in "kube-system" namespace has status "Ready":"False"
	I0910 18:04:09.025111    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:09.025111    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:09.025219    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:09.025219    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:09.029650    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:09.030024    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:09.030024    5904 round_trippers.go:580]     Audit-Id: e8da05ba-c172-488c-b9b8-c7b585e81a0c
	I0910 18:04:09.030024    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:09.030024    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:09.030024    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:09.030024    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:09.030024    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:09 GMT
	I0910 18:04:09.030328    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"503","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6899 chars]
	I0910 18:04:09.031356    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:09.031449    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:09.031449    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:09.031449    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:09.034795    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:09.034795    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:09.034795    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:09.034795    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:09.034795    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:09.034795    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:09.034795    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:09 GMT
	I0910 18:04:09.034795    5904 round_trippers.go:580]     Audit-Id: 4f06147d-7ae8-4d1a-a453-45a3c30f54d9
	I0910 18:04:09.035178    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:09.525657    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:09.525657    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:09.525657    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:09.525657    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:09.529373    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:09.529373    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:09.529373    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:09 GMT
	I0910 18:04:09.529373    5904 round_trippers.go:580]     Audit-Id: 1a69820d-3b9d-4d19-a76e-a4aca23b9914
	I0910 18:04:09.529373    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:09.529373    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:09.529373    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:09.529959    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:09.530129    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"503","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6899 chars]
	I0910 18:04:09.530968    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:09.531034    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:09.531034    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:09.531034    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:09.534742    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:09.534742    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:09.534742    5904 round_trippers.go:580]     Audit-Id: 6df7cfea-83e0-4cc4-9c93-1e12db52b79b
	I0910 18:04:09.534742    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:09.534742    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:09.534742    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:09.534953    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:09.534953    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:09 GMT
	I0910 18:04:09.535019    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:10.024761    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:10.024853    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:10.024853    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:10.024853    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:10.038292    5904 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0910 18:04:10.038292    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:10.038292    5904 round_trippers.go:580]     Audit-Id: 05b5fdac-8cfd-4f64-b5ec-72631fef8c99
	I0910 18:04:10.038615    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:10.038615    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:10.038615    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:10.038615    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:10.038615    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:10 GMT
	I0910 18:04:10.038790    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"503","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6899 chars]
	I0910 18:04:10.039350    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:10.039417    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:10.039417    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:10.039417    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:10.044242    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:10.044242    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:10.044242    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:10.044242    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:10.044242    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:10 GMT
	I0910 18:04:10.044242    5904 round_trippers.go:580]     Audit-Id: 6a7829ed-2768-4bff-83be-8f7afa7ef6b7
	I0910 18:04:10.044242    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:10.044242    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:10.044242    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:10.526379    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:10.526472    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:10.526472    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:10.526561    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:10.530869    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:10.531429    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:10.531429    5904 round_trippers.go:580]     Audit-Id: bd62bfb8-2068-4d93-a6f7-2d5b5c54c4b5
	I0910 18:04:10.531429    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:10.531429    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:10.531429    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:10.531520    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:10.531520    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:10 GMT
	I0910 18:04:10.532275    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"570","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6675 chars]
	I0910 18:04:10.532823    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:10.532823    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:10.532823    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:10.532823    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:10.535082    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:10.535082    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:10.535082    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:10 GMT
	I0910 18:04:10.535082    5904 round_trippers.go:580]     Audit-Id: 75c22b77-f3a6-41bb-8f58-3d9a3f482f4f
	I0910 18:04:10.535082    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:10.535082    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:10.535659    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:10.535659    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:10.535959    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:10.536371    5904 pod_ready.go:93] pod "etcd-functional-879800" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:10.536371    5904 pod_ready.go:82] duration metric: took 4.0250211s for pod "etcd-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:10.536371    5904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:10.536499    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:10.536499    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:10.536499    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:10.536499    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:10.541641    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:10.541641    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:10.541641    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:10 GMT
	I0910 18:04:10.541641    5904 round_trippers.go:580]     Audit-Id: 2b404641-b7b0-4c6d-b196-82d29f3a6db8
	I0910 18:04:10.541641    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:10.541641    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:10.541641    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:10.541641    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:10.542245    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:10.542342    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:10.542342    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:10.542873    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:10.542873    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:10.545104    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:10.545104    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:10.545104    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:10.545104    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:10 GMT
	I0910 18:04:10.545104    5904 round_trippers.go:580]     Audit-Id: 415c2bcb-783f-4dd6-99b1-e5a997c2fd2c
	I0910 18:04:10.545104    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:10.545104    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:10.545104    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:10.545104    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:11.040951    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:11.041053    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:11.041053    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:11.041053    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:11.044863    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:11.044863    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:11.044863    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:11 GMT
	I0910 18:04:11.044863    5904 round_trippers.go:580]     Audit-Id: 0eaaf57a-6621-4a0a-a08c-f39457ea7182
	I0910 18:04:11.044863    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:11.044863    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:11.044863    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:11.044863    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:11.045151    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:11.046203    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:11.046288    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:11.046288    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:11.046288    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:11.052454    5904 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:04:11.052454    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:11.052454    5904 round_trippers.go:580]     Audit-Id: 5a8cc1b6-cbc7-4443-96b1-9ed341673ad0
	I0910 18:04:11.052454    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:11.052454    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:11.052454    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:11.052454    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:11.052454    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:11 GMT
	I0910 18:04:11.053123    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:11.540297    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:11.540414    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:11.540414    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:11.540414    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:11.544780    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:11.544780    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:11.544780    5904 round_trippers.go:580]     Audit-Id: de593579-290e-45fa-a89d-7030dc6fd262
	I0910 18:04:11.544780    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:11.544780    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:11.544780    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:11.544780    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:11.544780    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:11 GMT
	I0910 18:04:11.545367    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:11.546117    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:11.546197    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:11.546197    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:11.546197    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:11.552133    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:11.552133    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:11.552133    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:11.552133    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:11.552133    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:11.552133    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:11 GMT
	I0910 18:04:11.552133    5904 round_trippers.go:580]     Audit-Id: 79a50656-d8a9-445d-a50e-5596deb086a5
	I0910 18:04:11.552133    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:11.552865    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:12.040283    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:12.040283    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:12.040283    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:12.040283    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:12.045002    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:12.045002    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:12.045002    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:12 GMT
	I0910 18:04:12.045002    5904 round_trippers.go:580]     Audit-Id: cfed15ec-a37f-4e6f-9cb9-911c4ab78a8a
	I0910 18:04:12.045002    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:12.045002    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:12.045002    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:12.045002    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:12.046283    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:12.047347    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:12.047347    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:12.047347    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:12.047347    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:12.049588    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:12.050082    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:12.050082    5904 round_trippers.go:580]     Audit-Id: 4301d56b-3abd-4b7c-b6d8-fc3028064447
	I0910 18:04:12.050082    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:12.050082    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:12.050082    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:12.050082    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:12.050082    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:12 GMT
	I0910 18:04:12.050300    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:12.540251    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:12.540251    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:12.540251    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:12.540251    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:12.543897    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:12.543897    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:12.543897    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:12 GMT
	I0910 18:04:12.543897    5904 round_trippers.go:580]     Audit-Id: 642c99ec-fa06-4424-8dea-a035d946383b
	I0910 18:04:12.543897    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:12.543897    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:12.543897    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:12.543897    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:12.544429    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:12.545630    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:12.545714    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:12.545714    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:12.545714    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:12.551197    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:12.551197    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:12.551197    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:12 GMT
	I0910 18:04:12.551197    5904 round_trippers.go:580]     Audit-Id: 4d70794f-195c-460b-a1cb-db2508969287
	I0910 18:04:12.551197    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:12.551197    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:12.551197    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:12.551197    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:12.551197    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:12.551197    5904 pod_ready.go:103] pod "kube-apiserver-functional-879800" in "kube-system" namespace has status "Ready":"False"
	I0910 18:04:13.042512    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:13.042512    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:13.042512    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:13.042512    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:13.048181    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:13.048217    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:13.048286    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:13.048286    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:13.048286    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:13.048286    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:13.048329    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:13 GMT
	I0910 18:04:13.048329    5904 round_trippers.go:580]     Audit-Id: 5eda7805-253c-462e-b94a-df2b5d596ba8
	I0910 18:04:13.048329    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:13.049738    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:13.049738    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:13.049738    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:13.049738    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:13.052292    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:13.052292    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:13.052292    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:13 GMT
	I0910 18:04:13.052292    5904 round_trippers.go:580]     Audit-Id: 2ca2c2ae-67c1-49c9-87a3-228524904b44
	I0910 18:04:13.052292    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:13.052292    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:13.052292    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:13.052292    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:13.052292    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:13.539970    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:13.539970    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:13.539970    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:13.539970    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:13.543741    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:13.544098    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:13.544167    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:13 GMT
	I0910 18:04:13.544167    5904 round_trippers.go:580]     Audit-Id: 0a40ee9c-d770-416c-9062-f7ce67528d28
	I0910 18:04:13.544167    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:13.544167    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:13.544167    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:13.544167    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:13.544458    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:13.545623    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:13.545623    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:13.545710    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:13.545710    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:13.547924    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:13.547924    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:13.547924    5904 round_trippers.go:580]     Audit-Id: 0ef3dc17-b58f-4b02-957c-9830606f929c
	I0910 18:04:13.547924    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:13.547924    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:13.547924    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:13.547924    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:13.547924    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:13 GMT
	I0910 18:04:13.549087    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:14.040986    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:14.041372    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:14.041372    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:14.041465    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:14.045016    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:14.045979    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:14.045979    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:14 GMT
	I0910 18:04:14.045979    5904 round_trippers.go:580]     Audit-Id: 0b19be42-c5b4-46f5-bea1-9c2c123d979d
	I0910 18:04:14.045979    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:14.045979    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:14.046078    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:14.046078    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:14.046505    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:14.046766    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:14.046766    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:14.046766    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:14.046766    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:14.049854    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:14.049854    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:14.049854    5904 round_trippers.go:580]     Audit-Id: 0fa4cf33-cf37-4455-ba6a-27a8d1a816fc
	I0910 18:04:14.049854    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:14.049854    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:14.049854    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:14.049854    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:14.049854    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:14 GMT
	I0910 18:04:14.050433    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:14.544168    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:14.544168    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:14.544168    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:14.544168    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:14.550669    5904 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:04:14.550669    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:14.550669    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:14 GMT
	I0910 18:04:14.550669    5904 round_trippers.go:580]     Audit-Id: e04468d4-f742-46c6-887e-0e8ce5840f6d
	I0910 18:04:14.550669    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:14.550669    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:14.550669    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:14.550669    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:14.550669    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:14.552337    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:14.552337    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:14.552422    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:14.552422    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:14.554006    5904 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 18:04:14.554006    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:14.554006    5904 round_trippers.go:580]     Audit-Id: 98d7a130-cced-4fde-b984-fd09af1b9412
	I0910 18:04:14.554006    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:14.554006    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:14.554006    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:14.554006    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:14.554006    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:14 GMT
	I0910 18:04:14.555014    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:14.555014    5904 pod_ready.go:103] pod "kube-apiserver-functional-879800" in "kube-system" namespace has status "Ready":"False"
	I0910 18:04:15.042342    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:15.042342    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:15.042342    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:15.042342    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:15.044945    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:15.044945    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:15.044945    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:15 GMT
	I0910 18:04:15.045780    5904 round_trippers.go:580]     Audit-Id: d55b8d5a-31d4-479c-a4db-1527dcfe2320
	I0910 18:04:15.045780    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:15.045848    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:15.045848    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:15.045848    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:15.045848    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:15.047349    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:15.047349    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:15.047349    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:15.047349    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:15.052997    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:15.053615    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:15.053615    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:15.053615    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:15.053615    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:15 GMT
	I0910 18:04:15.053615    5904 round_trippers.go:580]     Audit-Id: d331aeea-f42c-4ea8-abbe-3689b7ec2ddb
	I0910 18:04:15.053615    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:15.053615    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:15.053615    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:15.542623    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:15.542623    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:15.542623    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:15.542623    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:15.546833    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:15.546833    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:15.546833    5904 round_trippers.go:580]     Audit-Id: f3ecb1db-2f87-4949-8bb4-2b6ab3038abc
	I0910 18:04:15.546833    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:15.546833    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:15.546833    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:15.546833    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:15.546833    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:15 GMT
	I0910 18:04:15.546833    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:15.547830    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:15.547918    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:15.547918    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:15.548007    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:15.550743    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:15.550743    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:15.550743    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:15.550743    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:15.550743    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:15.550743    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:15 GMT
	I0910 18:04:15.550743    5904 round_trippers.go:580]     Audit-Id: 67e2f980-aa7e-4d6c-926e-18283c9c6e9c
	I0910 18:04:15.550743    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:15.551113    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:16.044093    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:16.044093    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:16.044093    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:16.044093    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:16.047641    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:16.047641    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:16.047641    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:16.047641    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:16.047641    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:16 GMT
	I0910 18:04:16.048601    5904 round_trippers.go:580]     Audit-Id: 1d19d5c1-f072-4232-9ad6-502fecb96bc9
	I0910 18:04:16.048601    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:16.048601    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:16.048960    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:16.050066    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:16.050131    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:16.050131    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:16.050189    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:16.054773    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:16.054773    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:16.054773    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:16.054773    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:16.054773    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:16.054773    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:16.054773    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:16 GMT
	I0910 18:04:16.054773    5904 round_trippers.go:580]     Audit-Id: 155c6b3e-7376-4aae-94c0-0548b17ff614
	I0910 18:04:16.055304    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:16.542103    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:16.542103    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:16.542192    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:16.542192    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:16.545596    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:16.545596    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:16.545596    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:16.545596    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:16.545596    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:16 GMT
	I0910 18:04:16.545596    5904 round_trippers.go:580]     Audit-Id: 3cf25a2b-037d-430e-b714-df3d75712804
	I0910 18:04:16.545596    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:16.545596    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:16.546859    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:16.547732    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:16.547818    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:16.547818    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:16.547818    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:16.553893    5904 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:04:16.553893    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:16.553893    5904 round_trippers.go:580]     Audit-Id: 4e4d9599-2533-4908-af8d-45350fe96e73
	I0910 18:04:16.553893    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:16.553893    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:16.553893    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:16.553893    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:16.553893    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:16 GMT
	I0910 18:04:16.553893    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:17.040230    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:17.040684    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:17.040684    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:17.040684    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:17.045852    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:17.045927    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:17.045927    5904 round_trippers.go:580]     Audit-Id: b81f2e75-68a3-4d6d-947d-5cad92110541
	I0910 18:04:17.045927    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:17.045927    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:17.045927    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:17.045927    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:17.046005    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:17 GMT
	I0910 18:04:17.046325    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:17.047345    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:17.047448    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:17.047448    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:17.047448    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:17.050651    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:17.050766    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:17.050766    5904 round_trippers.go:580]     Audit-Id: 96bcb8c8-ac7e-4126-bb5f-0cb5d7883c8d
	I0910 18:04:17.050766    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:17.050766    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:17.050766    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:17.050766    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:17.050870    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:17 GMT
	I0910 18:04:17.051070    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:17.051685    5904 pod_ready.go:103] pod "kube-apiserver-functional-879800" in "kube-system" namespace has status "Ready":"False"
	I0910 18:04:17.541007    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:17.541077    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:17.541148    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:17.541148    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:17.544492    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:17.544492    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:17.544492    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:17.544492    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:17.544492    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:17 GMT
	I0910 18:04:17.544492    5904 round_trippers.go:580]     Audit-Id: 5129219d-bc64-406e-8af0-cd1cb958ab72
	I0910 18:04:17.544492    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:17.544981    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:17.545512    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:17.546386    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:17.546460    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:17.546511    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:17.546511    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:17.548456    5904 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 18:04:17.548456    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:17.548456    5904 round_trippers.go:580]     Audit-Id: a63b2241-22b2-46af-8198-642d276237a4
	I0910 18:04:17.548456    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:17.548456    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:17.548456    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:17.548456    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:17.548456    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:17 GMT
	I0910 18:04:17.549627    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:18.037536    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:18.037620    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:18.037620    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:18.037620    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:18.041115    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:18.041115    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:18.041115    5904 round_trippers.go:580]     Audit-Id: 9f913d41-231f-4dbb-bab3-2d9b4849f109
	I0910 18:04:18.041115    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:18.041115    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:18.041115    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:18.041115    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:18.041115    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:18 GMT
	I0910 18:04:18.041668    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:18.042635    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:18.042711    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:18.042711    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:18.042780    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:18.048662    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:18.048662    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:18.048662    5904 round_trippers.go:580]     Audit-Id: 8bd88e17-06a6-4cd2-b3ba-9d522cf4bb08
	I0910 18:04:18.048662    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:18.048662    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:18.048662    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:18.048662    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:18.048662    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:18 GMT
	I0910 18:04:18.049370    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:18.541119    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:18.541119    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:18.541216    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:18.541216    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:18.546782    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:18.546782    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:18.546782    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:18.546782    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:18.546782    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:18.547324    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:18.547324    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:18 GMT
	I0910 18:04:18.547324    5904 round_trippers.go:580]     Audit-Id: 7f3d01b6-cfad-4823-91a6-6fe98f9cbfcf
	I0910 18:04:18.547570    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:18.547570    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:18.547570    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:18.547570    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:18.547570    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:18.551116    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:18.551152    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:18.551152    5904 round_trippers.go:580]     Audit-Id: 80441340-9e44-4910-a3a6-46640f771e12
	I0910 18:04:18.551152    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:18.551152    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:18.551152    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:18.551152    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:18.551205    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:18 GMT
	I0910 18:04:18.551317    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:19.038113    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:19.038113    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:19.038113    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:19.038113    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:19.041699    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:19.042276    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:19.042276    5904 round_trippers.go:580]     Audit-Id: 2f8c50eb-d6b8-49fe-93a4-c28951f08fb6
	I0910 18:04:19.042276    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:19.042276    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:19.042276    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:19.042359    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:19.042359    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:19 GMT
	I0910 18:04:19.042931    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:19.044153    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:19.044234    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:19.044234    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:19.044234    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:19.047036    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:19.047580    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:19.047645    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:19.047645    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:19 GMT
	I0910 18:04:19.047645    5904 round_trippers.go:580]     Audit-Id: cd5aa440-dcd2-4845-ad56-db5f3fe7f31c
	I0910 18:04:19.047645    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:19.047645    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:19.047645    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:19.047773    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:19.552698    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:19.552698    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:19.552698    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:19.552698    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:19.557244    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:19.557244    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:19.557244    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:19.557399    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:19.557399    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:19.557399    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:19 GMT
	I0910 18:04:19.557399    5904 round_trippers.go:580]     Audit-Id: 9447694e-bd19-42b1-b47b-bf6e02908419
	I0910 18:04:19.557399    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:19.557499    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:19.558269    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:19.558269    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:19.558269    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:19.558269    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:19.563769    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:19.563769    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:19.563769    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:19.564308    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:19.564308    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:19 GMT
	I0910 18:04:19.564308    5904 round_trippers.go:580]     Audit-Id: 44293030-2b5b-476f-af25-186310d3def5
	I0910 18:04:19.564345    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:19.564345    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:19.564455    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:19.564455    5904 pod_ready.go:103] pod "kube-apiserver-functional-879800" in "kube-system" namespace has status "Ready":"False"
	I0910 18:04:20.038319    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:20.038319    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.038420    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.038420    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.042406    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:20.042406    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.042406    5904 round_trippers.go:580]     Audit-Id: 5f0186fd-bfcb-4765-947c-5e579a835862
	I0910 18:04:20.042406    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.042406    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.042498    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.042498    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.042498    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.042685    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"497","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0910 18:04:20.043360    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:20.043360    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.043360    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.043360    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.048681    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:20.048681    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.048681    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.048681    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.048681    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.048681    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.048681    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.048681    5904 round_trippers.go:580]     Audit-Id: 7beeb1b5-242b-4908-aa4f-94ef5e2f182d
	I0910 18:04:20.048681    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:20.539230    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:20.539230    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.539230    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.539230    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.542841    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:20.543380    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.543380    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.543380    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.543490    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.543490    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.543490    5904 round_trippers.go:580]     Audit-Id: d2323b36-f36c-49c1-b439-9e54809f5488
	I0910 18:04:20.543490    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.543742    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"575","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7904 chars]
	I0910 18:04:20.545016    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:20.545016    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.545016    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.545016    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.549337    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:20.549337    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.549337    5904 round_trippers.go:580]     Audit-Id: 207b9837-d37a-43f9-8dfa-491da9d062a6
	I0910 18:04:20.549337    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.549337    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.549472    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.549472    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.549472    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.549814    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:20.550419    5904 pod_ready.go:93] pod "kube-apiserver-functional-879800" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:20.550419    5904 pod_ready.go:82] duration metric: took 10.0133083s for pod "kube-apiserver-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:20.550495    5904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:20.550559    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-879800
	I0910 18:04:20.550559    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.550559    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.550559    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.553295    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:20.553295    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.553295    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.553295    5904 round_trippers.go:580]     Audit-Id: 0275b9f1-2550-4d93-9bbf-6895d2c5b773
	I0910 18:04:20.553295    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.553295    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.553295    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.553295    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.554118    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-879800","namespace":"kube-system","uid":"af44dd4d-9ff1-4e8f-8ae7-33b2daa153cd","resourceVersion":"569","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d6512c75bee84986cca75a3ed5603ade","kubernetes.io/config.mirror":"d6512c75bee84986cca75a3ed5603ade","kubernetes.io/config.seen":"2024-09-10T18:01:18.626315006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0910 18:04:20.555371    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:20.555448    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.555494    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.555525    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.557717    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:20.557717    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.557717    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.557717    5904 round_trippers.go:580]     Audit-Id: 0c7e4396-f1ef-4100-ab06-b062686e931f
	I0910 18:04:20.557717    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.557717    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.557717    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.557717    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.557717    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:20.558926    5904 pod_ready.go:93] pod "kube-controller-manager-functional-879800" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:20.559003    5904 pod_ready.go:82] duration metric: took 8.4693ms for pod "kube-controller-manager-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:20.559003    5904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kpwfm" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:20.559255    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kpwfm
	I0910 18:04:20.559295    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.559364    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.559392    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.561122    5904 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 18:04:20.561122    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.561122    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.561122    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.561122    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.561122    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.561122    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.561122    5904 round_trippers.go:580]     Audit-Id: 57f8b82d-d705-448c-9e3e-91f9d7341e81
	I0910 18:04:20.562135    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kpwfm","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b755e57-fe35-4419-b5bc-697f67cc9cf8","resourceVersion":"558","creationTimestamp":"2024-09-10T18:01:23Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ee39e9f4-e5d3-4f78-ad1b-1ae33861750c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee39e9f4-e5d3-4f78-ad1b-1ae33861750c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6401 chars]
	I0910 18:04:20.562478    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:20.562478    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.562478    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.562478    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.565131    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:20.565612    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.565612    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.565612    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.565612    5904 round_trippers.go:580]     Audit-Id: bfbc91b8-9ba3-49d5-b5c6-96789377ff08
	I0910 18:04:20.565612    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.565612    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.565612    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.565864    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:20.566183    5904 pod_ready.go:93] pod "kube-proxy-kpwfm" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:20.566252    5904 pod_ready.go:82] duration metric: took 7.2486ms for pod "kube-proxy-kpwfm" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:20.566252    5904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:20.566328    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-879800
	I0910 18:04:20.566397    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.566397    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.566397    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.568585    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:20.568585    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.568585    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.568585    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.568585    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.568585    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.568585    5904 round_trippers.go:580]     Audit-Id: ec7befb4-9c50-4e88-9156-e35dcfd7227a
	I0910 18:04:20.568585    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.569236    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-879800","namespace":"kube-system","uid":"19fc695f-2881-45ac-afc0-1da162cc95f2","resourceVersion":"568","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"157d97d5dc65cf9cec201f4caa041805","kubernetes.io/config.mirror":"157d97d5dc65cf9cec201f4caa041805","kubernetes.io/config.seen":"2024-09-10T18:01:18.626316206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0910 18:04:20.569432    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:20.569432    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.569432    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.569432    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.572006    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:20.572006    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.572006    5904 round_trippers.go:580]     Audit-Id: 44dc9f05-9795-4a15-9aa0-eb85414cf661
	I0910 18:04:20.572006    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.572006    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.572006    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.572006    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.572006    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:20 GMT
	I0910 18:04:20.572364    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:20.572364    5904 pod_ready.go:93] pod "kube-scheduler-functional-879800" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:20.572364    5904 pod_ready.go:82] duration metric: took 6.1118ms for pod "kube-scheduler-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:20.572364    5904 pod_ready.go:39] duration metric: took 15.0897906s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:04:20.573091    5904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:04:20.590762    5904 command_runner.go:130] > -16
	I0910 18:04:20.590857    5904 ops.go:34] apiserver oom_adj: -16
	I0910 18:04:20.590857    5904 kubeadm.go:597] duration metric: took 47.2792454s to restartPrimaryControlPlane
	I0910 18:04:20.590857    5904 kubeadm.go:394] duration metric: took 47.3714622s to StartCluster
	I0910 18:04:20.590857    5904 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:04:20.590999    5904 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:04:20.592168    5904 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:04:20.593292    5904 start.go:235] Will wait 6m0s for node &{Name: IP:172.31.208.92 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:04:20.593292    5904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 18:04:20.593292    5904 addons.go:69] Setting storage-provisioner=true in profile "functional-879800"
	I0910 18:04:20.593292    5904 addons.go:234] Setting addon storage-provisioner=true in "functional-879800"
	W0910 18:04:20.593292    5904 addons.go:243] addon storage-provisioner should already be in state true
	I0910 18:04:20.593292    5904 addons.go:69] Setting default-storageclass=true in profile "functional-879800"
	I0910 18:04:20.593292    5904 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:04:20.593292    5904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-879800"
	I0910 18:04:20.593835    5904 host.go:66] Checking if "functional-879800" exists ...
	I0910 18:04:20.594666    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:04:20.595804    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:04:20.596232    5904 out.go:177] * Verifying Kubernetes components...
	I0910 18:04:20.609655    5904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:04:20.873543    5904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:04:20.899124    5904 node_ready.go:35] waiting up to 6m0s for node "functional-879800" to be "Ready" ...
	I0910 18:04:20.899124    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:20.899124    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.899124    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.899124    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.902819    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:20.902945    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.902945    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.902945    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.902945    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:21 GMT
	I0910 18:04:20.902945    5904 round_trippers.go:580]     Audit-Id: 80f57295-08eb-4eb6-8280-c9e1c09f5c57
	I0910 18:04:20.902945    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.902945    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.902945    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:20.903851    5904 node_ready.go:49] node "functional-879800" has status "Ready":"True"
	I0910 18:04:20.903851    5904 node_ready.go:38] duration metric: took 4.7272ms for node "functional-879800" to be "Ready" ...
	I0910 18:04:20.903851    5904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:04:20.903928    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods
	I0910 18:04:20.904014    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.904057    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.904057    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.910283    5904 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:04:20.910283    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.910283    5904 round_trippers.go:580]     Audit-Id: 70200d11-b944-417d-a692-771bd4f69f63
	I0910 18:04:20.910283    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.910283    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.910283    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.910283    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.910283    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:21 GMT
	I0910 18:04:20.912289    5904 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"575"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-266t8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"e2d0f1c5-7959-4f05-a592-c427855eb2da","resourceVersion":"560","creationTimestamp":"2024-09-10T18:01:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"e617221a-3d59-4054-85fe-856d80505706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e617221a-3d59-4054-85fe-856d80505706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51213 chars]
	I0910 18:04:20.917270    5904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-266t8" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:20.917501    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-266t8
	I0910 18:04:20.917501    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.917544    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.917544    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.920741    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:20.920741    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.920741    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.920741    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.920741    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:21 GMT
	I0910 18:04:20.920741    5904 round_trippers.go:580]     Audit-Id: 842fdb24-bcc7-42e8-a40d-1f3487085267
	I0910 18:04:20.920741    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.920741    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.920741    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-266t8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"e2d0f1c5-7959-4f05-a592-c427855eb2da","resourceVersion":"560","creationTimestamp":"2024-09-10T18:01:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"e617221a-3d59-4054-85fe-856d80505706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e617221a-3d59-4054-85fe-856d80505706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6706 chars]
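The `pod_ready` checks in this log fetch each pod and then decide readiness from its status conditions. A minimal sketch of that decision, assuming the standard Kubernetes convention that a pod is Ready when its status carries a condition of type `"Ready"` with status `"True"` (the helper name `pod_is_ready` is illustrative, not minikube's actual code):

```python
def pod_is_ready(pod: dict) -> bool:
    """Return True if a Pod object (as a decoded JSON dict, like the
    response bodies logged above) has a Ready condition set to True."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False
```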
	I0910 18:04:20.949825    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:20.949825    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:20.949825    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:20.949825    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:20.953743    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:20.953936    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:20.953936    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:20.953936    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:20.953936    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:20.954118    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:20.954118    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:21 GMT
	I0910 18:04:20.954216    5904 round_trippers.go:580]     Audit-Id: aae1dc3e-1787-4169-82c3-6454af527a86
	I0910 18:04:20.954276    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:20.954963    5904 pod_ready.go:93] pod "coredns-6f6b679f8f-266t8" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:20.954963    5904 pod_ready.go:82] duration metric: took 37.6908ms for pod "coredns-6f6b679f8f-266t8" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:20.954963    5904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:21.140632    5904 request.go:632] Waited for 185.6558ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:21.140632    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/etcd-functional-879800
	I0910 18:04:21.140632    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:21.140632    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:21.140632    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:21.145228    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:21.145329    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:21.145329    5904 round_trippers.go:580]     Audit-Id: 41e463c0-59e4-4280-bc4b-07e31d6da9bc
	I0910 18:04:21.145399    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:21.145399    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:21.145399    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:21.145399    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:21.145437    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:21 GMT
	I0910 18:04:21.145437    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-879800","namespace":"kube-system","uid":"ca01f784-07cb-4ff1-b2f1-86794a6f2633","resourceVersion":"570","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.208.92:2379","kubernetes.io/config.hash":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.mirror":"0d90ceb92018b92cb146f2e5e2e41150","kubernetes.io/config.seen":"2024-09-10T18:01:18.626310006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6675 chars]
	I0910 18:04:21.347710    5904 request.go:632] Waited for 201.4994ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:21.347849    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:21.347849    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:21.347849    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:21.347849    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:21.351923    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:21.351923    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:21.351923    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:21.351923    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:21.351923    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:21.351923    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:21.351923    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:21 GMT
	I0910 18:04:21.351923    5904 round_trippers.go:580]     Audit-Id: aff808ac-5639-492f-a9d6-58438a4d6ceb
	I0910 18:04:21.352450    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:21.352450    5904 pod_ready.go:93] pod "etcd-functional-879800" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:21.352450    5904 pod_ready.go:82] duration metric: took 397.4599ms for pod "etcd-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:21.352450    5904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:21.554985    5904 request.go:632] Waited for 201.5539ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:21.555098    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800
	I0910 18:04:21.555098    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:21.555098    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:21.555098    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:21.559167    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:21.559167    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:21.559167    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:21.559167    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:21.559167    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:21 GMT
	I0910 18:04:21.559167    5904 round_trippers.go:580]     Audit-Id: 7856c3e0-7b3e-4ada-83e5-cd66842e215d
	I0910 18:04:21.559167    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:21.559167    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:21.560220    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-879800","namespace":"kube-system","uid":"275b24d2-eae4-4b12-bc75-53c7ca992b97","resourceVersion":"575","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.208.92:8441","kubernetes.io/config.hash":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.mirror":"a52b68ea1609e6e9c45e25b43c1638c7","kubernetes.io/config.seen":"2024-09-10T18:01:18.626313706Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7904 chars]
	I0910 18:04:21.744500    5904 request.go:632] Waited for 183.529ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:21.744837    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:21.744837    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:21.744837    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:21.744837    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:21.748051    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:21.748531    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:21.748609    5904 round_trippers.go:580]     Audit-Id: e2ac3635-2a76-4aff-8721-d371f54c0233
	I0910 18:04:21.748609    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:21.748609    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:21.748609    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:21.748609    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:21.748673    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:21 GMT
	I0910 18:04:21.748967    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:21.749371    5904 pod_ready.go:93] pod "kube-apiserver-functional-879800" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:21.749371    5904 pod_ready.go:82] duration metric: took 396.8937ms for pod "kube-apiserver-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:21.749472    5904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:21.949941    5904 request.go:632] Waited for 200.183ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-879800
	I0910 18:04:21.950040    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-879800
	I0910 18:04:21.950040    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:21.950040    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:21.950040    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:21.953358    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:21.953358    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:21.953358    5904 round_trippers.go:580]     Audit-Id: 4704af06-99e2-4d9e-ad94-ac5dd0e9586d
	I0910 18:04:21.953358    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:21.953358    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:21.953358    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:21.953358    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:21.953358    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:22 GMT
	I0910 18:04:21.954733    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-879800","namespace":"kube-system","uid":"af44dd4d-9ff1-4e8f-8ae7-33b2daa153cd","resourceVersion":"569","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d6512c75bee84986cca75a3ed5603ade","kubernetes.io/config.mirror":"d6512c75bee84986cca75a3ed5603ade","kubernetes.io/config.seen":"2024-09-10T18:01:18.626315006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0910 18:04:22.140443    5904 request.go:632] Waited for 185.461ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:22.140443    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:22.140443    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:22.140443    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:22.140443    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:22.144794    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:22.144862    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:22.144862    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:22 GMT
	I0910 18:04:22.144862    5904 round_trippers.go:580]     Audit-Id: 677e7d5b-bacd-4abe-a20a-fcf1dc31cbd1
	I0910 18:04:22.144862    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:22.144862    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:22.144862    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:22.144862    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:22.144862    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:22.145570    5904 pod_ready.go:93] pod "kube-controller-manager-functional-879800" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:22.145570    5904 pod_ready.go:82] duration metric: took 396.071ms for pod "kube-controller-manager-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:22.145570    5904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kpwfm" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:22.346551    5904 request.go:632] Waited for 200.9677ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kpwfm
	I0910 18:04:22.346551    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kpwfm
	I0910 18:04:22.346551    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:22.346551    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:22.346551    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:22.350846    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:22.350846    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:22.350846    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:22.350846    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:22 GMT
	I0910 18:04:22.350846    5904 round_trippers.go:580]     Audit-Id: ff146b98-f1c4-4e9e-a610-6de7f04aaf58
	I0910 18:04:22.350846    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:22.350846    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:22.350846    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:22.351385    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kpwfm","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b755e57-fe35-4419-b5bc-697f67cc9cf8","resourceVersion":"558","creationTimestamp":"2024-09-10T18:01:23Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ee39e9f4-e5d3-4f78-ad1b-1ae33861750c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee39e9f4-e5d3-4f78-ad1b-1ae33861750c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6401 chars]
	I0910 18:04:22.540261    5904 request.go:632] Waited for 188.6619ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:22.540261    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:22.540261    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:22.540261    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:22.540261    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:22.542859    5904 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:04:22.543899    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:22.543899    5904 round_trippers.go:580]     Audit-Id: 858a4843-a9b1-4ff9-b872-5bfd88d86808
	I0910 18:04:22.543899    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:22.543899    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:22.543899    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:22.543899    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:22.543899    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:22 GMT
	I0910 18:04:22.544136    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:22.544208    5904 pod_ready.go:93] pod "kube-proxy-kpwfm" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:22.544208    5904 pod_ready.go:82] duration metric: took 398.6112ms for pod "kube-proxy-kpwfm" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:22.544208    5904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:22.549200    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:04:22.549200    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:04:22.549200    5904 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:04:22.549200    5904 kapi.go:59] client config for functional-879800: &rest.Config{Host:"https://172.31.208.92:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 18:04:22.549200    5904 addons.go:234] Setting addon default-storageclass=true in "functional-879800"
	W0910 18:04:22.549200    5904 addons.go:243] addon default-storageclass should already be in state true
	I0910 18:04:22.549200    5904 host.go:66] Checking if "functional-879800" exists ...
	I0910 18:04:22.549200    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:04:22.567600    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:04:22.567600    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:04:22.569829    5904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:04:22.572770    5904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:04:22.572770    5904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:04:22.572770    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:04:22.744955    5904 request.go:632] Waited for 200.7328ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-879800
	I0910 18:04:22.744955    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-879800
	I0910 18:04:22.744955    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:22.744955    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:22.744955    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:22.748139    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:22.748139    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:22.748139    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:22.748139    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:22 GMT
	I0910 18:04:22.748139    5904 round_trippers.go:580]     Audit-Id: cd822167-4dc9-4c9d-890a-c9c7dfc4690a
	I0910 18:04:22.748139    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:22.748139    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:22.748139    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:22.749135    5904 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-879800","namespace":"kube-system","uid":"19fc695f-2881-45ac-afc0-1da162cc95f2","resourceVersion":"568","creationTimestamp":"2024-09-10T18:01:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"157d97d5dc65cf9cec201f4caa041805","kubernetes.io/config.mirror":"157d97d5dc65cf9cec201f4caa041805","kubernetes.io/config.seen":"2024-09-10T18:01:18.626316206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0910 18:04:22.939971    5904 request.go:632] Waited for 190.4653ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:22.939971    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes/functional-879800
	I0910 18:04:22.939971    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:22.939971    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:22.939971    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:22.944665    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:22.944665    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:22.944665    5904 round_trippers.go:580]     Audit-Id: 2ada6157-f93c-4e05-b925-025ec97986a9
	I0910 18:04:22.944758    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:22.944758    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:22.944758    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:22.944758    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:22.944758    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:23 GMT
	I0910 18:04:22.944758    5904 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-10T18:01:15Z","fieldsType":"FieldsV1", [truncated 4779 chars]
	I0910 18:04:22.945592    5904 pod_ready.go:93] pod "kube-scheduler-functional-879800" in "kube-system" namespace has status "Ready":"True"
	I0910 18:04:22.945592    5904 pod_ready.go:82] duration metric: took 401.357ms for pod "kube-scheduler-functional-879800" in "kube-system" namespace to be "Ready" ...
	I0910 18:04:22.945592    5904 pod_ready.go:39] duration metric: took 2.0416027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:04:22.945592    5904 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:04:22.959519    5904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:04:22.992662    5904 command_runner.go:130] > 6318
	I0910 18:04:22.993008    5904 api_server.go:72] duration metric: took 2.3995532s to wait for apiserver process to appear ...
	I0910 18:04:22.993099    5904 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:04:22.993099    5904 api_server.go:253] Checking apiserver healthz at https://172.31.208.92:8441/healthz ...
	I0910 18:04:23.003367    5904 api_server.go:279] https://172.31.208.92:8441/healthz returned 200:
	ok
	I0910 18:04:23.003806    5904 round_trippers.go:463] GET https://172.31.208.92:8441/version
	I0910 18:04:23.003806    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:23.003863    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:23.003863    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:23.005451    5904 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 18:04:23.005530    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:23.005742    5904 round_trippers.go:580]     Content-Length: 263
	I0910 18:04:23.005814    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:23 GMT
	I0910 18:04:23.005814    5904 round_trippers.go:580]     Audit-Id: 142cbfde-7b48-4eee-9753-3240cf2c6f5d
	I0910 18:04:23.005814    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:23.005814    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:23.005814    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:23.005814    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:23.005814    5904 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0910 18:04:23.005814    5904 api_server.go:141] control plane version: v1.31.0
	I0910 18:04:23.005814    5904 api_server.go:131] duration metric: took 12.7136ms to wait for apiserver health ...
	I0910 18:04:23.005814    5904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:04:23.143326    5904 request.go:632] Waited for 137.3809ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods
	I0910 18:04:23.143548    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods
	I0910 18:04:23.143548    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:23.143548    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:23.143548    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:23.149166    5904 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:04:23.149396    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:23.149396    5904 round_trippers.go:580]     Audit-Id: 2ff4bd0a-e05f-4d55-92f4-7e8156b049c1
	I0910 18:04:23.149396    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:23.149493    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:23.149520    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:23.149520    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:23.149520    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:23 GMT
	I0910 18:04:23.150293    5904 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"577"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-266t8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"e2d0f1c5-7959-4f05-a592-c427855eb2da","resourceVersion":"560","creationTimestamp":"2024-09-10T18:01:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"e617221a-3d59-4054-85fe-856d80505706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e617221a-3d59-4054-85fe-856d80505706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51213 chars]
	I0910 18:04:23.153447    5904 system_pods.go:59] 7 kube-system pods found
	I0910 18:04:23.153555    5904 system_pods.go:61] "coredns-6f6b679f8f-266t8" [e2d0f1c5-7959-4f05-a592-c427855eb2da] Running
	I0910 18:04:23.153555    5904 system_pods.go:61] "etcd-functional-879800" [ca01f784-07cb-4ff1-b2f1-86794a6f2633] Running
	I0910 18:04:23.153555    5904 system_pods.go:61] "kube-apiserver-functional-879800" [275b24d2-eae4-4b12-bc75-53c7ca992b97] Running
	I0910 18:04:23.153555    5904 system_pods.go:61] "kube-controller-manager-functional-879800" [af44dd4d-9ff1-4e8f-8ae7-33b2daa153cd] Running
	I0910 18:04:23.153555    5904 system_pods.go:61] "kube-proxy-kpwfm" [2b755e57-fe35-4419-b5bc-697f67cc9cf8] Running
	I0910 18:04:23.153555    5904 system_pods.go:61] "kube-scheduler-functional-879800" [19fc695f-2881-45ac-afc0-1da162cc95f2] Running
	I0910 18:04:23.153663    5904 system_pods.go:61] "storage-provisioner" [180e0128-a2a6-4313-9a60-36e9992df753] Running
	I0910 18:04:23.153663    5904 system_pods.go:74] duration metric: took 147.839ms to wait for pod list to return data ...
	I0910 18:04:23.153663    5904 default_sa.go:34] waiting for default service account to be created ...
	I0910 18:04:23.349954    5904 request.go:632] Waited for 195.8008ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/namespaces/default/serviceaccounts
	I0910 18:04:23.350021    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/default/serviceaccounts
	I0910 18:04:23.350021    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:23.350021    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:23.350021    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:23.354844    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:23.354909    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:23.354909    5904 round_trippers.go:580]     Audit-Id: 6d709b7d-48e0-4296-8c20-23d62669dad1
	I0910 18:04:23.354909    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:23.354909    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:23.354909    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:23.354909    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:23.354909    5904 round_trippers.go:580]     Content-Length: 261
	I0910 18:04:23.354909    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:23 GMT
	I0910 18:04:23.354909    5904 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"577"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b4045010-5fb0-4563-8f9b-55295c451d47","resourceVersion":"298","creationTimestamp":"2024-09-10T18:01:23Z"}}]}
	I0910 18:04:23.354909    5904 default_sa.go:45] found service account: "default"
	I0910 18:04:23.354909    5904 default_sa.go:55] duration metric: took 201.2328ms for default service account to be created ...
	I0910 18:04:23.354909    5904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 18:04:23.540150    5904 request.go:632] Waited for 184.6158ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods
	I0910 18:04:23.540519    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods
	I0910 18:04:23.540519    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:23.540519    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:23.540519    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:23.544975    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:23.544975    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:23.544975    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:23.544975    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:23.544975    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:23.544975    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:23.544975    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:23 GMT
	I0910 18:04:23.544975    5904 round_trippers.go:580]     Audit-Id: 3b7bdf13-0879-40f8-bf43-a4632026fd82
	I0910 18:04:23.545690    5904 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"578"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-266t8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"e2d0f1c5-7959-4f05-a592-c427855eb2da","resourceVersion":"560","creationTimestamp":"2024-09-10T18:01:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"e617221a-3d59-4054-85fe-856d80505706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e617221a-3d59-4054-85fe-856d80505706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51213 chars]
	I0910 18:04:23.547922    5904 system_pods.go:86] 7 kube-system pods found
	I0910 18:04:23.548013    5904 system_pods.go:89] "coredns-6f6b679f8f-266t8" [e2d0f1c5-7959-4f05-a592-c427855eb2da] Running
	I0910 18:04:23.548013    5904 system_pods.go:89] "etcd-functional-879800" [ca01f784-07cb-4ff1-b2f1-86794a6f2633] Running
	I0910 18:04:23.548013    5904 system_pods.go:89] "kube-apiserver-functional-879800" [275b24d2-eae4-4b12-bc75-53c7ca992b97] Running
	I0910 18:04:23.548013    5904 system_pods.go:89] "kube-controller-manager-functional-879800" [af44dd4d-9ff1-4e8f-8ae7-33b2daa153cd] Running
	I0910 18:04:23.548013    5904 system_pods.go:89] "kube-proxy-kpwfm" [2b755e57-fe35-4419-b5bc-697f67cc9cf8] Running
	I0910 18:04:23.548013    5904 system_pods.go:89] "kube-scheduler-functional-879800" [19fc695f-2881-45ac-afc0-1da162cc95f2] Running
	I0910 18:04:23.548013    5904 system_pods.go:89] "storage-provisioner" [180e0128-a2a6-4313-9a60-36e9992df753] Running
	I0910 18:04:23.548013    5904 system_pods.go:126] duration metric: took 193.0905ms to wait for k8s-apps to be running ...
	I0910 18:04:23.548013    5904 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 18:04:23.556486    5904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:04:23.578907    5904 system_svc.go:56] duration metric: took 30.8918ms WaitForService to wait for kubelet
	I0910 18:04:23.578907    5904 kubeadm.go:582] duration metric: took 2.9854118s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:04:23.578907    5904 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:04:23.744388    5904 request.go:632] Waited for 164.4485ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.208.92:8441/api/v1/nodes
	I0910 18:04:23.744558    5904 round_trippers.go:463] GET https://172.31.208.92:8441/api/v1/nodes
	I0910 18:04:23.744558    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:23.744558    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:23.744558    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:23.748677    5904 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:04:23.748811    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:23.748811    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:23.748811    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:23.748811    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:23.748811    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:23.748811    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:23 GMT
	I0910 18:04:23.748811    5904 round_trippers.go:580]     Audit-Id: 6c19588b-ca3f-4147-b7e4-566288b734ab
	I0910 18:04:23.749015    5904 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"579"},"items":[{"metadata":{"name":"functional-879800","uid":"deb0e7d9-b7e7-4973-b365-98b0476bc697","resourceVersion":"496","creationTimestamp":"2024-09-10T18:01:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-879800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"functional-879800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T18_01_19_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4832 chars]
	I0910 18:04:23.749015    5904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:04:23.749015    5904 node_conditions.go:123] node cpu capacity is 2
	I0910 18:04:23.749015    5904 node_conditions.go:105] duration metric: took 170.0967ms to run NodePressure ...
	I0910 18:04:23.749015    5904 start.go:241] waiting for startup goroutines ...
	I0910 18:04:24.548438    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:04:24.549324    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:04:24.549383    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:04:24.549383    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:04:24.549383    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:04:24.549383    5904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:04:24.549383    5904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:04:24.549383    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:04:26.525663    5904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:04:26.525758    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:04:26.525758    5904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:04:26.875832    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:04:26.875832    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:04:26.875832    5904 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:04:27.015014    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:04:27.793174    5904 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0910 18:04:27.793174    5904 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0910 18:04:27.793174    5904 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0910 18:04:27.793174    5904 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0910 18:04:27.793174    5904 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0910 18:04:27.793174    5904 command_runner.go:130] > pod/storage-provisioner configured
	I0910 18:04:28.857810    5904 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:04:28.858005    5904 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:04:28.858322    5904 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:04:28.986001    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:04:29.128103    5904 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0910 18:04:29.128554    5904 round_trippers.go:463] GET https://172.31.208.92:8441/apis/storage.k8s.io/v1/storageclasses
	I0910 18:04:29.128554    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:29.128554    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:29.128554    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:29.132144    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:29.132144    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:29.132144    5904 round_trippers.go:580]     Audit-Id: 89074cf9-535a-4b2e-bbe1-68a77b31f3de
	I0910 18:04:29.132255    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:29.132255    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:29.132255    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:29.132255    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:29.132255    5904 round_trippers.go:580]     Content-Length: 1273
	I0910 18:04:29.132255    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:29 GMT
	I0910 18:04:29.132329    5904 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"583"},"items":[{"metadata":{"name":"standard","uid":"eae7ca6d-9925-47ef-b637-8f0a18955bc1","resourceVersion":"395","creationTimestamp":"2024-09-10T18:01:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-10T18:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0910 18:04:29.132960    5904 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"eae7ca6d-9925-47ef-b637-8f0a18955bc1","resourceVersion":"395","creationTimestamp":"2024-09-10T18:01:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-10T18:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0910 18:04:29.132960    5904 round_trippers.go:463] PUT https://172.31.208.92:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0910 18:04:29.132960    5904 round_trippers.go:469] Request Headers:
	I0910 18:04:29.132960    5904 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:04:29.132960    5904 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:04:29.132960    5904 round_trippers.go:473]     Content-Type: application/json
	I0910 18:04:29.137473    5904 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:04:29.137473    5904 round_trippers.go:577] Response Headers:
	I0910 18:04:29.137473    5904 round_trippers.go:580]     Audit-Id: 4c2c799c-afdf-4623-8d01-225ead271539
	I0910 18:04:29.137473    5904 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 18:04:29.137473    5904 round_trippers.go:580]     Content-Type: application/json
	I0910 18:04:29.137473    5904 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 24785bde-f1c1-4b2f-be65-e31242463307
	I0910 18:04:29.137473    5904 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5190d3d5-6ba0-4ad0-b558-a08715d607c7
	I0910 18:04:29.137473    5904 round_trippers.go:580]     Content-Length: 1220
	I0910 18:04:29.137473    5904 round_trippers.go:580]     Date: Tue, 10 Sep 2024 18:04:29 GMT
	I0910 18:04:29.137473    5904 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"eae7ca6d-9925-47ef-b637-8f0a18955bc1","resourceVersion":"395","creationTimestamp":"2024-09-10T18:01:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-10T18:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0910 18:04:29.141406    5904 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0910 18:04:29.143794    5904 addons.go:510] duration metric: took 8.5499223s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0910 18:04:29.143873    5904 start.go:246] waiting for cluster config update ...
	I0910 18:04:29.143873    5904 start.go:255] writing updated cluster config ...
	I0910 18:04:29.151829    5904 ssh_runner.go:195] Run: rm -f paused
	I0910 18:04:29.287113    5904 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 18:04:29.292981    5904 out.go:177] * Done! kubectl is now configured to use "functional-879800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579554466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579693676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579959195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.183864383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184172805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184257111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184545932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:03 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:04:03Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142383799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142727123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142745425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142869734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.149939241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150201360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150333870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150634691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183608960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183799773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183823275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.184056992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:04:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f/resolv.conf as [nameserver 172.31.208.1]"
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.748853453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749108971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749218279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749397892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bc3783fda6976       cbb01a7bd410d       About a minute ago   Running             coredns                   2                   f1a968c18d127       coredns-6f6b679f8f-266t8
	eefc0cc191528       ad83b2ca7b09e       About a minute ago   Running             kube-proxy                2                   f2e8bfe81fcb7       kube-proxy-kpwfm
	7378f8354a266       6e38f40d628db       About a minute ago   Running             storage-provisioner       2                   1441db9085178       storage-provisioner
	95fffa728db0d       604f5db92eaa8       2 minutes ago        Running             kube-apiserver            2                   ca368efebfef8       kube-apiserver-functional-879800
	9979cd5778faa       2e96e5913fc06       2 minutes ago        Running             etcd                      2                   e1315bf73debf       etcd-functional-879800
	32232c2de1ef0       045733566833c       2 minutes ago        Running             kube-controller-manager   2                   b91bb2bb94961       kube-controller-manager-functional-879800
	b547c299d7dd0       604f5db92eaa8       2 minutes ago        Exited              kube-apiserver            1                   ca368efebfef8       kube-apiserver-functional-879800
	8a9895975328d       1766f54c897f0       2 minutes ago        Running             kube-scheduler            2                   70728a77068ef       kube-scheduler-functional-879800
	37ad76e1635d5       cbb01a7bd410d       2 minutes ago        Exited              coredns                   1                   ec5fefa9ec4e4       coredns-6f6b679f8f-266t8
	5156bce01a2cb       2e96e5913fc06       2 minutes ago        Exited              etcd                      1                   e3923829bb152       etcd-functional-879800
	d9b410da7bc72       045733566833c       2 minutes ago        Exited              kube-controller-manager   1                   976499ca237d0       kube-controller-manager-functional-879800
	07900b60b5ab8       ad83b2ca7b09e       2 minutes ago        Exited              kube-proxy                1                   6e715701f9a82       kube-proxy-kpwfm
	ab25d8c1a3f8b       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       1                   ccc45d3582747       storage-provisioner
	6e134f080fb7a       1766f54c897f0       2 minutes ago        Exited              kube-scheduler            1                   a3e714586dcf6       kube-scheduler-functional-879800
	
	
	==> coredns [37ad76e1635d] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50503 - 36786 "HINFO IN 6324254543031496928.100123732640770694. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.036062036s
	
	
	==> coredns [bc3783fda697] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43347 - 52739 "HINFO IN 6631340559963010093.5680630449949524915. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048740477s
	
	
	==> describe nodes <==
	Name:               functional-879800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-879800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=functional-879800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_01_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:01:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-879800
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:05:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:05:24 +0000   Tue, 10 Sep 2024 18:01:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:05:24 +0000   Tue, 10 Sep 2024 18:01:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:05:24 +0000   Tue, 10 Sep 2024 18:01:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:05:24 +0000   Tue, 10 Sep 2024 18:01:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.208.92
	  Hostname:    functional-879800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f0b6730df8a434e8001f868d358ca6c
	  System UUID:                c5d40e14-7e2f-ca45-ab59-c7107e73c06e
	  Boot ID:                    3192c81a-eea9-461e-a808-afadf40a3bf1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-266t8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m39s
	  kube-system                 etcd-functional-879800                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m45s
	  kube-system                 kube-apiserver-functional-879800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-functional-879800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-kpwfm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-functional-879800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m37s                  kube-proxy       
	  Normal  Starting                 118s                   kube-proxy       
	  Normal  NodeHasSufficientPID     4m45s                  kubelet          Node functional-879800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m45s                  kubelet          Node functional-879800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s                  kubelet          Node functional-879800 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m43s                  kubelet          Node functional-879800 status is now: NodeReady
	  Normal  RegisteredNode           4m40s                  node-controller  Node functional-879800 event: Registered Node functional-879800 in Controller
	  Normal  Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s (x8 over 2m18s)  kubelet          Node functional-879800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s (x8 over 2m18s)  kubelet          Node functional-879800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s (x7 over 2m18s)  kubelet          Node functional-879800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           117s                   node-controller  Node functional-879800 event: Registered Node functional-879800 in Controller
	
	
	==> dmesg <==
	[  +6.120662] systemd-fstab-generator[1813]: Ignoring "noauto" option for root device
	[  +0.099034] kauditd_printk_skb: 48 callbacks suppressed
	[  +7.995388] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[  +0.130685] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.338542] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +0.180800] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.356181] kauditd_printk_skb: 88 callbacks suppressed
	[Sep10 18:02] kauditd_printk_skb: 10 callbacks suppressed
	[Sep10 18:03] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[  +0.537677] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +0.225711] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +0.257523] systemd-fstab-generator[3818]: Ignoring "noauto" option for root device
	[  +5.340147] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.877359] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.186798] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.179954] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.256144] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.037441] systemd-fstab-generator[4763]: Ignoring "noauto" option for root device
	[  +3.061765] kauditd_printk_skb: 214 callbacks suppressed
	[  +8.635556] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.985505] systemd-fstab-generator[6074]: Ignoring "noauto" option for root device
	[ +13.788298] kauditd_printk_skb: 17 callbacks suppressed
	[Sep10 18:04] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.694005] systemd-fstab-generator[6726]: Ignoring "noauto" option for root device
	[  +0.162430] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [5156bce01a2c] <==
	{"level":"info","ts":"2024-09-10T18:03:33.135304Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-10T18:03:33.154001Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"7c3d02542c34cb66","local-member-id":"d9c3d41d82b4ab9e","commit-index":526}
	{"level":"info","ts":"2024-09-10T18:03:33.154080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9c3d41d82b4ab9e switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-10T18:03:33.154102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9c3d41d82b4ab9e became follower at term 2"}
	{"level":"info","ts":"2024-09-10T18:03:33.154118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d9c3d41d82b4ab9e [peers: [], term: 2, commit: 526, applied: 0, lastindex: 526, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-10T18:03:33.166305Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-10T18:03:33.200664Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":493}
	{"level":"info","ts":"2024-09-10T18:03:33.212096Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-10T18:03:33.229933Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"d9c3d41d82b4ab9e","timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:03:33.230526Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d9c3d41d82b4ab9e"}
	{"level":"info","ts":"2024-09-10T18:03:33.230737Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"d9c3d41d82b4ab9e","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-10T18:03:33.231859Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:03:33.233932Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-10T18:03:33.234553Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:03:33.234763Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:03:33.235133Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:03:33.235741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9c3d41d82b4ab9e switched to configuration voters=(15691618749900958622)"}
	{"level":"info","ts":"2024-09-10T18:03:33.235923Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7c3d02542c34cb66","local-member-id":"d9c3d41d82b4ab9e","added-peer-id":"d9c3d41d82b4ab9e","added-peer-peer-urls":["https://172.31.208.92:2380"]}
	{"level":"info","ts":"2024-09-10T18:03:33.236227Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7c3d02542c34cb66","local-member-id":"d9c3d41d82b4ab9e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:03:33.236392Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:03:33.245731Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T18:03:33.247794Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.31.208.92:2380"}
	{"level":"info","ts":"2024-09-10T18:03:33.248080Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.31.208.92:2380"}
	{"level":"info","ts":"2024-09-10T18:03:33.249287Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d9c3d41d82b4ab9e","initial-advertise-peer-urls":["https://172.31.208.92:2380"],"listen-peer-urls":["https://172.31.208.92:2380"],"advertise-client-urls":["https://172.31.208.92:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.208.92:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T18:03:33.251809Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [9979cd5778fa] <==
	{"level":"info","ts":"2024-09-10T18:04:00.790343Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7c3d02542c34cb66","local-member-id":"d9c3d41d82b4ab9e","added-peer-id":"d9c3d41d82b4ab9e","added-peer-peer-urls":["https://172.31.208.92:2380"]}
	{"level":"info","ts":"2024-09-10T18:04:00.790632Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7c3d02542c34cb66","local-member-id":"d9c3d41d82b4ab9e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:04:00.790750Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:04:00.794690Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:04:00.797801Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T18:04:00.798141Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.31.208.92:2380"}
	{"level":"info","ts":"2024-09-10T18:04:00.798442Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.31.208.92:2380"}
	{"level":"info","ts":"2024-09-10T18:04:00.800166Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d9c3d41d82b4ab9e","initial-advertise-peer-urls":["https://172.31.208.92:2380"],"listen-peer-urls":["https://172.31.208.92:2380"],"advertise-client-urls":["https://172.31.208.92:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.208.92:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T18:04:00.800337Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T18:04:02.059785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9c3d41d82b4ab9e is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-10T18:04:02.059993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9c3d41d82b4ab9e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T18:04:02.060141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9c3d41d82b4ab9e received MsgPreVoteResp from d9c3d41d82b4ab9e at term 2"}
	{"level":"info","ts":"2024-09-10T18:04:02.060222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9c3d41d82b4ab9e became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T18:04:02.060275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9c3d41d82b4ab9e received MsgVoteResp from d9c3d41d82b4ab9e at term 3"}
	{"level":"info","ts":"2024-09-10T18:04:02.060328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9c3d41d82b4ab9e became leader at term 3"}
	{"level":"info","ts":"2024-09-10T18:04:02.060411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9c3d41d82b4ab9e elected leader d9c3d41d82b4ab9e at term 3"}
	{"level":"info","ts":"2024-09-10T18:04:02.066924Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d9c3d41d82b4ab9e","local-member-attributes":"{Name:functional-879800 ClientURLs:[https://172.31.208.92:2379]}","request-path":"/0/members/d9c3d41d82b4ab9e/attributes","cluster-id":"7c3d02542c34cb66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:04:02.066971Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:04:02.067121Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:04:02.068031Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:04:02.068903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:04:02.072469Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.208.92:2379"}
	{"level":"info","ts":"2024-09-10T18:04:02.072848Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:04:02.072955Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:04:02.073244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:06:03 up 6 min,  0 users,  load average: 0.73, 0.68, 0.33
	Linux functional-879800 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [95fffa728db0] <==
	I0910 18:04:03.527730       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0910 18:04:03.527740       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 18:04:03.527845       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 18:04:03.529245       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0910 18:04:03.529397       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 18:04:03.529676       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 18:04:03.530747       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0910 18:04:03.531306       1 aggregator.go:171] initial CRD sync complete...
	I0910 18:04:03.531478       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 18:04:03.531721       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 18:04:03.531817       1 cache.go:39] Caches are synced for autoregister controller
	I0910 18:04:03.532702       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 18:04:03.534441       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0910 18:04:03.536680       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:04:03.536745       1 policy_source.go:224] refreshing policies
	I0910 18:04:03.607412       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 18:04:04.471632       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0910 18:04:05.250746       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.208.92]
	I0910 18:04:05.256063       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:04:05.272310       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 18:04:05.419358       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:04:05.439532       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:04:05.571309       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:04:05.631143       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 18:04:05.672958       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [b547c299d7dd] <==
	I0910 18:03:38.640804       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:03:39.048193       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0910 18:03:39.049350       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:39.049910       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0910 18:03:39.059665       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:03:39.063274       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0910 18:03:39.063294       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0910 18:03:39.063562       1 instance.go:232] Using reconciler: lease
	W0910 18:03:39.064602       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:40.050757       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:40.051011       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:40.065314       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:41.364243       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:41.551877       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:41.807756       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:44.428382       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:44.491702       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:44.824807       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:48.500093       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:48.524252       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:48.998079       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:54.853202       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:55.089400       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:03:55.374386       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0910 18:03:59.064643       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [32232c2de1ef] <==
	I0910 18:04:06.832342       1 shared_informer.go:320] Caches are synced for PVC protection
	I0910 18:04:06.833612       1 shared_informer.go:320] Caches are synced for expand
	I0910 18:04:06.836090       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0910 18:04:06.837741       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0910 18:04:06.844294       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0910 18:04:06.846982       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0910 18:04:06.849825       1 shared_informer.go:320] Caches are synced for PV protection
	I0910 18:04:06.852666       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0910 18:04:06.855231       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0910 18:04:06.863283       1 shared_informer.go:320] Caches are synced for ephemeral
	I0910 18:04:06.867740       1 shared_informer.go:320] Caches are synced for endpoint
	I0910 18:04:06.932117       1 shared_informer.go:320] Caches are synced for deployment
	I0910 18:04:06.935681       1 shared_informer.go:320] Caches are synced for disruption
	I0910 18:04:06.945484       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 18:04:06.974323       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 18:04:07.032896       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0910 18:04:07.056044       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0910 18:04:07.056161       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0910 18:04:07.056146       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0910 18:04:07.059101       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0910 18:04:07.447379       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 18:04:07.447479       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0910 18:04:07.472342       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 18:04:54.532468       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-879800"
	I0910 18:05:24.933865       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-879800"
	
	
	==> kube-controller-manager [d9b410da7bc7] <==
	
	
	==> kube-proxy [07900b60b5ab] <==
	
	
	==> kube-proxy [eefc0cc19152] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:04:04.515099       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:04:04.536958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.31.208.92"]
	E0910 18:04:04.537008       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:04:04.607943       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:04:04.607987       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:04:04.608011       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:04:04.611810       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:04:04.612196       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:04:04.612225       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:04:04.614138       1 config.go:197] "Starting service config controller"
	I0910 18:04:04.614179       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:04:04.614206       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:04:04.614213       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:04:04.615553       1 config.go:326] "Starting node config controller"
	I0910 18:04:04.615686       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:04:04.715468       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:04:04.715540       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:04:04.715796       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6e134f080fb7] <==
	I0910 18:03:34.305725       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [8a9895975328] <==
	W0910 18:04:00.077177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://172.31.208.92:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47902->172.31.208.92:8441: read: connection reset by peer
	W0910 18:04:00.077272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.208.92:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47918->172.31.208.92:8441: read: connection reset by peer
	E0910 18:04:00.077279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://172.31.208.92:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47902->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	E0910 18:04:00.077307       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.208.92:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47918->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	W0910 18:04:00.077413       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47940->172.31.208.92:8441: read: connection reset by peer
	W0910 18:04:00.077422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://172.31.208.92:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47886->172.31.208.92:8441: read: connection reset by peer
	E0910 18:04:00.077453       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://172.31.208.92:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47940->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	E0910 18:04:00.077464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://172.31.208.92:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47886->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	W0910 18:04:00.077531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://172.31.208.92:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47960->172.31.208.92:8441: read: connection reset by peer
	E0910 18:04:00.077553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://172.31.208.92:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47960->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	W0910 18:04:00.077768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://172.31.208.92:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47964->172.31.208.92:8441: read: connection reset by peer
	W0910 18:04:00.077782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://172.31.208.92:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47896->172.31.208.92:8441: read: connection reset by peer
	E0910 18:04:00.077827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://172.31.208.92:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47896->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	E0910 18:04:00.077794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://172.31.208.92:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47964->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	W0910 18:04:00.077905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://172.31.208.92:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47994->172.31.208.92:8441: read: connection reset by peer
	E0910 18:04:00.077953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://172.31.208.92:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47994->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	W0910 18:04:00.078035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://172.31.208.92:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47878->172.31.208.92:8441: read: connection reset by peer
	E0910 18:04:00.078055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://172.31.208.92:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47878->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	W0910 18:04:00.078142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://172.31.208.92:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47846->172.31.208.92:8441: read: connection reset by peer
	W0910 18:04:00.078147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://172.31.208.92:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:48038->172.31.208.92:8441: read: connection reset by peer
	E0910 18:04:00.078162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://172.31.208.92:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:47846->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	E0910 18:04:00.078183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://172.31.208.92:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:48038->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	W0910 18:04:00.078182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.208.92:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:48046->172.31.208.92:8441: read: connection reset by peer
	E0910 18:04:00.078212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.208.92:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 172.31.208.92:8441: connect: connection refused - error from a previous attempt: read tcp 172.31.208.92:48046->172.31.208.92:8441: read: connection reset by peer" logger="UnhandledError"
	I0910 18:04:10.684537       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 18:04:01 functional-879800 kubelet[6081]: E0910 18:04:01.277896    6081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused" interval="1.6s"
	Sep 10 18:04:01 functional-879800 kubelet[6081]: I0910 18:04:01.883683    6081 kubelet_node_status.go:72] "Attempting to register node" node="functional-879800"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.584646    6081 apiserver.go:52] "Watching apiserver"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.597226    6081 kubelet_node_status.go:111] "Node was previously registered" node="functional-879800"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.597388    6081 kubelet_node_status.go:75] "Successfully registered node" node="functional-879800"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.597445    6081 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.598861    6081 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.605918    6081 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.702748    6081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/180e0128-a2a6-4313-9a60-36e9992df753-tmp\") pod \"storage-provisioner\" (UID: \"180e0128-a2a6-4313-9a60-36e9992df753\") " pod="kube-system/storage-provisioner"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.702905    6081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b755e57-fe35-4419-b5bc-697f67cc9cf8-xtables-lock\") pod \"kube-proxy-kpwfm\" (UID: \"2b755e57-fe35-4419-b5bc-697f67cc9cf8\") " pod="kube-system/kube-proxy-kpwfm"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.702954    6081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b755e57-fe35-4419-b5bc-697f67cc9cf8-lib-modules\") pod \"kube-proxy-kpwfm\" (UID: \"2b755e57-fe35-4419-b5bc-697f67cc9cf8\") " pod="kube-system/kube-proxy-kpwfm"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.896221    6081 scope.go:117] "RemoveContainer" containerID="ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19"
	Sep 10 18:04:03 functional-879800 kubelet[6081]: I0910 18:04:03.897000    6081 scope.go:117] "RemoveContainer" containerID="07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4"
	Sep 10 18:04:04 functional-879800 kubelet[6081]: I0910 18:04:04.425220    6081 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f"
	Sep 10 18:04:04 functional-879800 kubelet[6081]: E0910 18:04:04.504724    6081 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-functional-879800\" already exists" pod="kube-system/kube-apiserver-functional-879800"
	Sep 10 18:04:45 functional-879800 kubelet[6081]: E0910 18:04:45.657868    6081 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:04:45 functional-879800 kubelet[6081]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:04:45 functional-879800 kubelet[6081]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:04:45 functional-879800 kubelet[6081]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:04:45 functional-879800 kubelet[6081]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:05:45 functional-879800 kubelet[6081]: E0910 18:05:45.656022    6081 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:05:45 functional-879800 kubelet[6081]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:05:45 functional-879800 kubelet[6081]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:05:45 functional-879800 kubelet[6081]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:05:45 functional-879800 kubelet[6081]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7378f8354a26] <==
	I0910 18:04:04.264042       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 18:04:04.305275       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 18:04:04.305366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 18:04:21.724194       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 18:04:21.724391       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-879800_8fb47dd9-b024-4507-9fc1-50a37e243ac1!
	I0910 18:04:21.724547       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60d0adf6-ec97-439b-9b24-48905cd28ed5", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-879800_8fb47dd9-b024-4507-9fc1-50a37e243ac1 became leader
	I0910 18:04:21.825275       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-879800_8fb47dd9-b024-4507-9fc1-50a37e243ac1!
	
	
	==> storage-provisioner [ab25d8c1a3f8] <==
	I0910 18:03:33.460745       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0910 18:03:33.462674       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800: (10.6933678s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-879800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (29.84s)

                                                
                                    
TestFunctional/serial/ExtraConfig (269.63s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-879800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-879800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m18.4509547s)

                                                
                                                
-- stdout --
	* [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-879800" primary control-plane node in "functional-879800" cluster
	* Updating the running hyperv "functional-879800" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 10 18:00:18 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:00:18 functional-879800 dockerd[654]: time="2024-09-10T18:00:18.048484724Z" level=info msg="Starting up"
	Sep 10 18:00:18 functional-879800 dockerd[654]: time="2024-09-10T18:00:18.049311817Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:00:18 functional-879800 dockerd[654]: time="2024-09-10T18:00:18.050378195Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.085669997Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113152633Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113198249Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113255169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113273076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113351804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113365408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113555476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113593289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113608394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113620399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113738040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113952016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117262689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117369927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117542988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117638322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117800380Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117961036Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143566607Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143812694Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143852008Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143870115Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143885420Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144004162Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144431814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144578366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144731920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144769633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144785139Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144800544Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144817951Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144833256Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144852263Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144874671Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144890676Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144903781Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144926489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144952798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144968904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144985010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144998314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145012219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145026424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145041330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145055335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145071540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145083945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145098250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145125159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145150368Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145174477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145188082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145200886Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145248603Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145268610Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145281715Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145295520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145308424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145321929Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145343637Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145564815Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145769388Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145831810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145867622Z" level=info msg="containerd successfully booted in 0.061994s"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.121888725Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.154996562Z" level=info msg="Loading containers: start."
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.319648547Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.537934908Z" level=info msg="Loading containers: done."
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.562684996Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.563059323Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.675990996Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:00:19 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.677273131Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:00:47 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.375581804Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377311327Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377382432Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377399033Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377426335Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Sep 10 18:00:48 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:00:48 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:00:48 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:00:48 functional-879800 dockerd[1071]: time="2024-09-10T18:00:48.429522254Z" level=info msg="Starting up"
	Sep 10 18:00:48 functional-879800 dockerd[1071]: time="2024-09-10T18:00:48.431008259Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:00:48 functional-879800 dockerd[1071]: time="2024-09-10T18:00:48.432192643Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1077
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.459542277Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.483918202Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.483975206Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484022209Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484041010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484077013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484097014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484291928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484426638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484453740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484468941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484501543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484661254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487483054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487608563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487911684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487954587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487995390Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488021392Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488424620Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488479324Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488499926Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488520527Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488550029Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488615634Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489261880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489448693Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489569201Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489694710Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489718112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489736913Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490086038Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490114540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490146942Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490166844Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490183545Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490199646Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490225648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490251750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490274451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490293053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490318254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490335756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490352257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490374458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490393260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490413461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490429562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490445963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490463665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490485466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490512768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490632877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490655178Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490865893Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491038605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491056507Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491073508Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491087909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491113411Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491129912Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491451335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491783358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.492068478Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.492180986Z" level=info msg="containerd successfully booted in 0.033661s"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.478401846Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.503818243Z" level=info msg="Loading containers: start."
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.627367383Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.742364917Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.836564080Z" level=info msg="Loading containers: done."
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.856624799Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.856752008Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.901724089Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.901914602Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:00:49 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:00:58 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.351344467Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.352684961Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.352996783Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.353065188Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.353083690Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:00:59 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:00:59 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:00:59 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:00:59 functional-879800 dockerd[1425]: time="2024-09-10T18:00:59.401497648Z" level=info msg="Starting up"
	Sep 10 18:00:59 functional-879800 dockerd[1425]: time="2024-09-10T18:00:59.402190497Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:00:59 functional-879800 dockerd[1425]: time="2024-09-10T18:00:59.403078760Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1431
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.428368049Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.449998479Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450096286Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450129688Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450142389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450164791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450174991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450305701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450395907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450413008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450422409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450452311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450544818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453475725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453561631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453687940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453768346Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453918756Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454018963Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454243479Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454324385Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454344286Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454358687Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454371288Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454409991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454713813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454920227Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454938528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454951729Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454965530Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454978831Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454991632Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455008133Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455022934Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455040736Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455054637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455066637Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455086439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455104040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455117041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455130342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455142643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455155844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455168445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455181646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455194647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455209148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455220848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455233149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455246050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455266652Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455286253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455299754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455311255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455351858Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455369459Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455382260Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455395761Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455406762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455419762Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455430363Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455625977Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455965501Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.456012404Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.456047007Z" level=info msg="containerd successfully booted in 0.028476s"
	Sep 10 18:01:00 functional-879800 dockerd[1425]: time="2024-09-10T18:01:00.554280190Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.339558604Z" level=info msg="Loading containers: start."
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.470992401Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.589342572Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.687264299Z" level=info msg="Loading containers: done."
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.712100256Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.712238065Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.753412878Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:01:03 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.757907896Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843562023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843615227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843627628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843743036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881096164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881214473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881246876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881574000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.918963031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.919037037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.919054638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.923719891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926194878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926268884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926345190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926692516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293309781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293378786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293422989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293516496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.380781776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.380924186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.381002392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.381188906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424221851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424448368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424528474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424720389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427031863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427122170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427138071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427705214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596259546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596394956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596410457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596594870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977439827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977546135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977564036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977701946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012423876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012517682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012536184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012641591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.036026293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.036651138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.036822651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.037129473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681454980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681601191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681638693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681841508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716309594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716465805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716490007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716597815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277424206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277585217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277609619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277749028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549402567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549468572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549487373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549601481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:35 functional-879800 dockerd[1425]: time="2024-09-10T18:01:35.869111260Z" level=info msg="ignoring event" container=f8fdd3b265468b67be1adcf36a8550e3434d52705793e41c0a4aa20a6347af10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:01:35 functional-879800 dockerd[1431]: time="2024-09-10T18:01:35.870287741Z" level=info msg="shim disconnected" id=f8fdd3b265468b67be1adcf36a8550e3434d52705793e41c0a4aa20a6347af10 namespace=moby
	Sep 10 18:01:35 functional-879800 dockerd[1431]: time="2024-09-10T18:01:35.870471854Z" level=warning msg="cleaning up after shim disconnected" id=f8fdd3b265468b67be1adcf36a8550e3434d52705793e41c0a4aa20a6347af10 namespace=moby
	Sep 10 18:01:35 functional-879800 dockerd[1431]: time="2024-09-10T18:01:35.870482255Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:01:36 functional-879800 dockerd[1431]: time="2024-09-10T18:01:36.047551699Z" level=info msg="shim disconnected" id=0fbb75cf9db50f56284994377744954858ba1a907974bd24a03fe204d61c7a95 namespace=moby
	Sep 10 18:01:36 functional-879800 dockerd[1425]: time="2024-09-10T18:01:36.047769414Z" level=info msg="ignoring event" container=0fbb75cf9db50f56284994377744954858ba1a907974bd24a03fe204d61c7a95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:01:36 functional-879800 dockerd[1431]: time="2024-09-10T18:01:36.047937726Z" level=warning msg="cleaning up after shim disconnected" id=0fbb75cf9db50f56284994377744954858ba1a907974bd24a03fe204d61c7a95 namespace=moby
	Sep 10 18:01:36 functional-879800 dockerd[1431]: time="2024-09-10T18:01:36.047999430Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:16 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:03:16 functional-879800 dockerd[1425]: time="2024-09-10T18:03:16.937916216Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.104463094Z" level=info msg="shim disconnected" id=b0e69451632d78db5e981243b5f1d242d579a9cb2b3e395174eb8c40be18ef68 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.105821089Z" level=info msg="ignoring event" container=b0e69451632d78db5e981243b5f1d242d579a9cb2b3e395174eb8c40be18ef68 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.106062106Z" level=warning msg="cleaning up after shim disconnected" id=b0e69451632d78db5e981243b5f1d242d579a9cb2b3e395174eb8c40be18ef68 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.106172314Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.155843996Z" level=info msg="ignoring event" container=7acd3a1687c63b8c19da613fcc79f3ba70ee7bdccf6bfa44b405956d9a17e01a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.159695566Z" level=info msg="shim disconnected" id=7acd3a1687c63b8c19da613fcc79f3ba70ee7bdccf6bfa44b405956d9a17e01a namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.159819375Z" level=warning msg="cleaning up after shim disconnected" id=7acd3a1687c63b8c19da613fcc79f3ba70ee7bdccf6bfa44b405956d9a17e01a namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.159869979Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.162078434Z" level=info msg="ignoring event" container=19fb530e5214fd7fb18cc260293e9cea4c56f4ca100b000374ac5e148404a252 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.166101616Z" level=info msg="shim disconnected" id=19fb530e5214fd7fb18cc260293e9cea4c56f4ca100b000374ac5e148404a252 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.166157220Z" level=warning msg="cleaning up after shim disconnected" id=19fb530e5214fd7fb18cc260293e9cea4c56f4ca100b000374ac5e148404a252 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.166167620Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.176262428Z" level=info msg="ignoring event" container=c0055974541d4cbd1c38c909cedf16c2ed8f6961f77502ddc6a666599fa123cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.185350765Z" level=info msg="shim disconnected" id=c0055974541d4cbd1c38c909cedf16c2ed8f6961f77502ddc6a666599fa123cc namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.185385668Z" level=warning msg="cleaning up after shim disconnected" id=c0055974541d4cbd1c38c909cedf16c2ed8f6961f77502ddc6a666599fa123cc namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.185396969Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.186332834Z" level=info msg="ignoring event" container=443d0b1bfd8f94fe9a5f42d5018ef70e241d0c811e2829414b69c6a7a70ff38e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.188048755Z" level=info msg="shim disconnected" id=443d0b1bfd8f94fe9a5f42d5018ef70e241d0c811e2829414b69c6a7a70ff38e namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.188830609Z" level=warning msg="cleaning up after shim disconnected" id=443d0b1bfd8f94fe9a5f42d5018ef70e241d0c811e2829414b69c6a7a70ff38e namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.188906315Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.200281212Z" level=info msg="ignoring event" container=f4c45e58e5ad749c5b6ac4af995a6c64ffc1454bdb9e450711e7d05383ac6e88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.200529330Z" level=info msg="shim disconnected" id=f4c45e58e5ad749c5b6ac4af995a6c64ffc1454bdb9e450711e7d05383ac6e88 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.200784748Z" level=warning msg="cleaning up after shim disconnected" id=f4c45e58e5ad749c5b6ac4af995a6c64ffc1454bdb9e450711e7d05383ac6e88 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.200839551Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.217470818Z" level=info msg="shim disconnected" id=2fded56066ba805dd117038086ed87c1c07f3b8f07964442e237b58e1f5c1af9 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.217710534Z" level=warning msg="cleaning up after shim disconnected" id=2fded56066ba805dd117038086ed87c1c07f3b8f07964442e237b58e1f5c1af9 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.217740836Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.219247142Z" level=info msg="ignoring event" container=2fded56066ba805dd117038086ed87c1c07f3b8f07964442e237b58e1f5c1af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.225671793Z" level=info msg="shim disconnected" id=afe36a9485ea5c3fb74437c46ad24d9fc5481951b877b6ee8d2994f6a2ddc7d8 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.225916110Z" level=warning msg="cleaning up after shim disconnected" id=afe36a9485ea5c3fb74437c46ad24d9fc5481951b877b6ee8d2994f6a2ddc7d8 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.226028518Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.247117296Z" level=info msg="ignoring event" container=afe36a9485ea5c3fb74437c46ad24d9fc5481951b877b6ee8d2994f6a2ddc7d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.247437619Z" level=info msg="ignoring event" container=d1a139975a680fdec8a1d64f0b916ac536e20a4dc7a0300b5fa144c176679c27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.251142178Z" level=info msg="shim disconnected" id=d1a139975a680fdec8a1d64f0b916ac536e20a4dc7a0300b5fa144c176679c27 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.251512604Z" level=warning msg="cleaning up after shim disconnected" id=d1a139975a680fdec8a1d64f0b916ac536e20a4dc7a0300b5fa144c176679c27 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.251636113Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.276859782Z" level=info msg="shim disconnected" id=0e83830f0f0ea0b4df02286ef66171d35a7aba6988e545ec7dbdd8a0e7f72148 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.277179904Z" level=warning msg="cleaning up after shim disconnected" id=0e83830f0f0ea0b4df02286ef66171d35a7aba6988e545ec7dbdd8a0e7f72148 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.277504227Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.283016913Z" level=info msg="shim disconnected" id=c72de8d88c7aba63faf855d522ae12d6f047e03d0461df47e86e59ce16212e7b namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.283534650Z" level=warning msg="cleaning up after shim disconnected" id=c72de8d88c7aba63faf855d522ae12d6f047e03d0461df47e86e59ce16212e7b namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.283722663Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.286985492Z" level=info msg="shim disconnected" id=9040449f00775036426d6df47306914e57c2bc2fba000ed833d7019bf407398c namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.287105900Z" level=warning msg="cleaning up after shim disconnected" id=9040449f00775036426d6df47306914e57c2bc2fba000ed833d7019bf407398c namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.287116101Z" level=info msg="ignoring event" container=0e83830f0f0ea0b4df02286ef66171d35a7aba6988e545ec7dbdd8a0e7f72148 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.287148703Z" level=info msg="ignoring event" container=c72de8d88c7aba63faf855d522ae12d6f047e03d0461df47e86e59ce16212e7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.287236709Z" level=info msg="ignoring event" container=9040449f00775036426d6df47306914e57c2bc2fba000ed833d7019bf407398c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.287482627Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.379983012Z" level=warning msg="cleanup warnings time=\"2024-09-10T18:03:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 18:03:22 functional-879800 dockerd[1431]: time="2024-09-10T18:03:22.099065101Z" level=info msg="shim disconnected" id=0fe7875f1ea8d61fc0c9e87d00b4d91469ae530abf0233a722959f81e0194889 namespace=moby
	Sep 10 18:03:22 functional-879800 dockerd[1425]: time="2024-09-10T18:03:22.099724747Z" level=info msg="ignoring event" container=0fe7875f1ea8d61fc0c9e87d00b4d91469ae530abf0233a722959f81e0194889 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:22 functional-879800 dockerd[1431]: time="2024-09-10T18:03:22.100757719Z" level=warning msg="cleaning up after shim disconnected" id=0fe7875f1ea8d61fc0c9e87d00b4d91469ae530abf0233a722959f81e0194889 namespace=moby
	Sep 10 18:03:22 functional-879800 dockerd[1431]: time="2024-09-10T18:03:22.101024838Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.012561677Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.051920467Z" level=info msg="ignoring event" container=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:27 functional-879800 dockerd[1431]: time="2024-09-10T18:03:27.052656729Z" level=info msg="shim disconnected" id=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572 namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1431]: time="2024-09-10T18:03:27.052756037Z" level=warning msg="cleaning up after shim disconnected" id=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572 namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1431]: time="2024-09-10T18:03:27.052802441Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.115701599Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.116521267Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.116719684Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.116799091Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:03:28 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:03:28 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:03:28 functional-879800 systemd[1]: docker.service: Consumed 4.993s CPU time.
	Sep 10 18:03:28 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:03:28 functional-879800 dockerd[4225]: time="2024-09-10T18:03:28.177678725Z" level=info msg="Starting up"
	Sep 10 18:03:28 functional-879800 dockerd[4225]: time="2024-09-10T18:03:28.178393684Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:03:28 functional-879800 dockerd[4225]: time="2024-09-10T18:03:28.179695692Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4232
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.207359481Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237220152Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237314660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237345862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237360364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237381065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237390766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237533778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237671289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237687191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237701792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237722194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237908509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240594931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240670738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240887856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240905057Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240941960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240959161Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241119775Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241144677Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241161778Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241175279Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241187380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241226984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241424400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241648818Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241672420Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241685922Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241697423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241709323Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241719824Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241734226Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241747927Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241758628Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241769428Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241857236Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241881338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241894339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241905440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241916441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241929442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241945843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241968845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241982446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241993947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242006348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242017149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242028350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242040051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242054052Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242071353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242081954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242091855Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242126558Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242141659Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242151360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242161861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242170662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242181063Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242190263Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242404181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242441984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242473787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242488388Z" level=info msg="containerd successfully booted in 0.036034s"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.221963311Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.251989072Z" level=info msg="Loading containers: start."
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.477893887Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.597202766Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.690319598Z" level=info msg="Loading containers: done."
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.713867228Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.714011940Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.759721887Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:03:29 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.760516852Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.474834842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.477836684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.477986196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.478476035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.510897746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.511034857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.511069460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.511269876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655061155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655210967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655245270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655529393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675075567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675276783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675387892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675715318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.709728657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.709965176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.709997579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.713421555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.746807643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.750921375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.751140292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.751395413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994018751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994220867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994425083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994619599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.435937252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.436180472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.436302481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.436537800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.630888601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.631197725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.631278432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.631740769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668391192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668465598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668552605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668683415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.773245454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.774814880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.774922088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.775131905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.943943668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.944004773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.944020374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.944179787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.488015566Z" level=info msg="shim disconnected" id=ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.488335787Z" level=warning msg="cleaning up after shim disconnected" id=ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.488651308Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.490343523Z" level=info msg="ignoring event" container=ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.872559495Z" level=info msg="shim disconnected" id=ec5fefa9ec4e4d3e503ab66ef7d7ee0d763b84ac316e50857a591b02760be35a namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.873405352Z" level=warning msg="cleaning up after shim disconnected" id=ec5fefa9ec4e4d3e503ab66ef7d7ee0d763b84ac316e50857a591b02760be35a namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.874702540Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.875848617Z" level=info msg="ignoring event" container=ec5fefa9ec4e4d3e503ab66ef7d7ee0d763b84ac316e50857a591b02760be35a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.888771789Z" level=info msg="shim disconnected" id=6e715701f9a8266eb62e2573ca940b61e284d0488b31fc1de779a3b6a25d7d0c namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.888810191Z" level=warning msg="cleaning up after shim disconnected" id=6e715701f9a8266eb62e2573ca940b61e284d0488b31fc1de779a3b6a25d7d0c namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.888819192Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.889035806Z" level=info msg="ignoring event" container=6e715701f9a8266eb62e2573ca940b61e284d0488b31fc1de779a3b6a25d7d0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.917781545Z" level=info msg="ignoring event" container=e3923829bb152a21bde50cf3a7e4abc655b6d8765a7c660bda0afaf4417b6e02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.918141469Z" level=info msg="shim disconnected" id=e3923829bb152a21bde50cf3a7e4abc655b6d8765a7c660bda0afaf4417b6e02 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.918260377Z" level=warning msg="cleaning up after shim disconnected" id=e3923829bb152a21bde50cf3a7e4abc655b6d8765a7c660bda0afaf4417b6e02 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.918272078Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.944717661Z" level=info msg="shim disconnected" id=a3e714586dcf6b36e8c5c2f012e6cbfa69737cb99e1aa1416f89384322466b2f namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.944835669Z" level=warning msg="cleaning up after shim disconnected" id=a3e714586dcf6b36e8c5c2f012e6cbfa69737cb99e1aa1416f89384322466b2f namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.944917975Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.949717498Z" level=info msg="ignoring event" container=a3e714586dcf6b36e8c5c2f012e6cbfa69737cb99e1aa1416f89384322466b2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.977057742Z" level=info msg="ignoring event" container=ccc45d35827479fb2a2ef42723180398bed51819259e00c6f8727e2de3d54fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.977481670Z" level=info msg="shim disconnected" id=ccc45d35827479fb2a2ef42723180398bed51819259e00c6f8727e2de3d54fbd namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.978397232Z" level=warning msg="cleaning up after shim disconnected" id=ccc45d35827479fb2a2ef42723180398bed51819259e00c6f8727e2de3d54fbd namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.978477638Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.000898249Z" level=info msg="shim disconnected" id=07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.001083962Z" level=warning msg="cleaning up after shim disconnected" id=07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.001195469Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.002897384Z" level=info msg="ignoring event" container=07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.002999991Z" level=info msg="ignoring event" container=976499ca237d00a46e41412bfdd50385936ad33df3496ca98cbb01e91d14a429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.003118699Z" level=info msg="shim disconnected" id=976499ca237d00a46e41412bfdd50385936ad33df3496ca98cbb01e91d14a429 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.003189804Z" level=warning msg="cleaning up after shim disconnected" id=976499ca237d00a46e41412bfdd50385936ad33df3496ca98cbb01e91d14a429 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.003227206Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.018068808Z" level=info msg="shim disconnected" id=5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.018993870Z" level=info msg="ignoring event" container=5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.019380696Z" level=info msg="ignoring event" container=d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.019830526Z" level=warning msg="cleaning up after shim disconnected" id=5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.019933033Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.025389001Z" level=info msg="shim disconnected" id=d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.027827366Z" level=warning msg="cleaning up after shim disconnected" id=d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.027941873Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.682948754Z" level=info msg="shim disconnected" id=6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.683012358Z" level=warning msg="cleaning up after shim disconnected" id=6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.683023959Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.683898418Z" level=info msg="ignoring event" container=6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.930124226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.930790771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.930965183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935005555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935089161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935102662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935277974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.936674368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.997074142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.997348360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.997478769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.999360296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.043805095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.044026110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.044152718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.044348231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.110987827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.111252645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.111371953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.111754779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396595197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396764509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396792111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396943421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.246438750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.247447118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.247714537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.247911850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.490659842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.490759649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.490809152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.491318787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:39 functional-879800 dockerd[4225]: time="2024-09-10T18:03:39.857401153Z" level=error msg="collecting stats for container /k8s_coredns_coredns-6f6b679f8f-266t8_kube-system_e2d0f1c5-7959-4f05-a592-c427855eb2da_1: invalid id: "
	Sep 10 18:03:39 functional-879800 dockerd[4225]: 2024/09/10 18:03:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Sep 10 18:03:43 functional-879800 dockerd[4225]: time="2024-09-10T18:03:43.336001164Z" level=info msg="ignoring event" container=37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:43 functional-879800 dockerd[4232]: time="2024-09-10T18:03:43.336882124Z" level=info msg="shim disconnected" id=37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db namespace=moby
	Sep 10 18:03:43 functional-879800 dockerd[4232]: time="2024-09-10T18:03:43.337599372Z" level=warning msg="cleaning up after shim disconnected" id=37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db namespace=moby
	Sep 10 18:03:43 functional-879800 dockerd[4232]: time="2024-09-10T18:03:43.338251216Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:59 functional-879800 dockerd[4225]: time="2024-09-10T18:03:59.091434642Z" level=info msg="ignoring event" container=b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:59 functional-879800 dockerd[4232]: time="2024-09-10T18:03:59.091715562Z" level=info msg="shim disconnected" id=b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72 namespace=moby
	Sep 10 18:03:59 functional-879800 dockerd[4232]: time="2024-09-10T18:03:59.091759665Z" level=warning msg="cleaning up after shim disconnected" id=b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72 namespace=moby
	Sep 10 18:03:59 functional-879800 dockerd[4232]: time="2024-09-10T18:03:59.091768466Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.349817160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.349968471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.349989472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.350086679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579090533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579554466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579693676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579959195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.183864383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184172805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184257111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184545932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142383799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142727123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142745425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142869734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.149939241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150201360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150333870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150634691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183608960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183799773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183823275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.184056992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.748853453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749108971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749218279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749397892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:07:22 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.352158112Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.454472074Z" level=info msg="ignoring event" container=ca368efebfef8f375661bc1f7feab0ccfe57faf05aab591cc3730852336e3631 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.455346635Z" level=info msg="shim disconnected" id=ca368efebfef8f375661bc1f7feab0ccfe57faf05aab591cc3730852336e3631 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.455998181Z" level=warning msg="cleaning up after shim disconnected" id=ca368efebfef8f375661bc1f7feab0ccfe57faf05aab591cc3730852336e3631 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.456247098Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.489687239Z" level=info msg="ignoring event" container=e1315bf73debfa4232f692c3e0a0b4b3d6ebfeb080aaca3518075c735293b608 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.492412930Z" level=info msg="shim disconnected" id=e1315bf73debfa4232f692c3e0a0b4b3d6ebfeb080aaca3518075c735293b608 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.492515837Z" level=warning msg="cleaning up after shim disconnected" id=e1315bf73debfa4232f692c3e0a0b4b3d6ebfeb080aaca3518075c735293b608 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.492571841Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.518042523Z" level=info msg="ignoring event" container=eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.520561100Z" level=info msg="shim disconnected" id=eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.520614103Z" level=warning msg="cleaning up after shim disconnected" id=eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.520624804Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.540950327Z" level=info msg="ignoring event" container=b91bb2bb949612368ff63dd975a8cddc51be42f2e2fb07f3abcde14dd62fe3f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.541259949Z" level=info msg="shim disconnected" id=b91bb2bb949612368ff63dd975a8cddc51be42f2e2fb07f3abcde14dd62fe3f6 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.541350255Z" level=warning msg="cleaning up after shim disconnected" id=b91bb2bb949612368ff63dd975a8cddc51be42f2e2fb07f3abcde14dd62fe3f6 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.541390758Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.560853320Z" level=info msg="ignoring event" container=f2e8bfe81fcb7f65d7564b425bc1ec750ed7d5f1c976edbcc00f0a02c9798ccd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.561386757Z" level=info msg="shim disconnected" id=f2e8bfe81fcb7f65d7564b425bc1ec750ed7d5f1c976edbcc00f0a02c9798ccd namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.561431160Z" level=warning msg="cleaning up after shim disconnected" id=f2e8bfe81fcb7f65d7564b425bc1ec750ed7d5f1c976edbcc00f0a02c9798ccd namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.561440261Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.579624034Z" level=info msg="ignoring event" container=7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.580145670Z" level=info msg="shim disconnected" id=7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.580207775Z" level=warning msg="cleaning up after shim disconnected" id=7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.580223876Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.613844829Z" level=info msg="shim disconnected" id=f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.614009341Z" level=warning msg="cleaning up after shim disconnected" id=f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.614114948Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.615510946Z" level=info msg="ignoring event" container=f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.621018031Z" level=info msg="shim disconnected" id=32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.621070135Z" level=warning msg="cleaning up after shim disconnected" id=32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.621080036Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.621344354Z" level=info msg="ignoring event" container=32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.636675327Z" level=info msg="ignoring event" container=1441db9085178360c9e1eab3a6f7848a379268e43886cd488c51c72a5ca5469e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.652287020Z" level=info msg="shim disconnected" id=1441db9085178360c9e1eab3a6f7848a379268e43886cd488c51c72a5ca5469e namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.652420729Z" level=warning msg="cleaning up after shim disconnected" id=1441db9085178360c9e1eab3a6f7848a379268e43886cd488c51c72a5ca5469e namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.652544838Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.661946796Z" level=info msg="ignoring event" container=8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.664658486Z" level=info msg="shim disconnected" id=8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.664874601Z" level=warning msg="cleaning up after shim disconnected" id=8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.664892402Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.695825967Z" level=info msg="ignoring event" container=70728a77068efe31fc3de9ed6238fa52a23c9b7aa136e603831e0473b2fbe635 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.696489314Z" level=info msg="shim disconnected" id=70728a77068efe31fc3de9ed6238fa52a23c9b7aa136e603831e0473b2fbe635 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.696534617Z" level=warning msg="cleaning up after shim disconnected" id=70728a77068efe31fc3de9ed6238fa52a23c9b7aa136e603831e0473b2fbe635 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.696545618Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.729004290Z" level=info msg="ignoring event" container=9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.735083615Z" level=info msg="shim disconnected" id=9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.736693128Z" level=warning msg="cleaning up after shim disconnected" id=9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.736835438Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.807212964Z" level=warning msg="cleanup warnings time=\"2024-09-10T18:07:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 18:07:27 functional-879800 dockerd[4225]: time="2024-09-10T18:07:27.429406593Z" level=info msg="ignoring event" container=bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:27 functional-879800 dockerd[4232]: time="2024-09-10T18:07:27.429688013Z" level=info msg="shim disconnected" id=bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6 namespace=moby
	Sep 10 18:07:27 functional-879800 dockerd[4232]: time="2024-09-10T18:07:27.430046838Z" level=warning msg="cleaning up after shim disconnected" id=bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6 namespace=moby
	Sep 10 18:07:27 functional-879800 dockerd[4232]: time="2024-09-10T18:07:27.430060039Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.442204332Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.478648728Z" level=info msg="ignoring event" container=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:32 functional-879800 dockerd[4232]: time="2024-09-10T18:07:32.479459150Z" level=info msg="shim disconnected" id=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035 namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4232]: time="2024-09-10T18:07:32.479530152Z" level=warning msg="cleaning up after shim disconnected" id=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035 namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4232]: time="2024-09-10T18:07:32.479542052Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.543970812Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.544248320Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.544421724Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.544458625Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:07:33 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:07:33 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:07:33 functional-879800 systemd[1]: docker.service: Consumed 9.274s CPU time.
	Sep 10 18:07:33 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:07:33 functional-879800 dockerd[8778]: time="2024-09-10T18:07:33.598343792Z" level=info msg="Starting up"
	Sep 10 18:08:33 functional-879800 dockerd[8778]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 10 18:08:33 functional-879800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 10 18:08:33 functional-879800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 10 18:08:33 functional-879800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-879800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:761: restart took 2m18.5935555s for "functional-879800" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800: exit status 2 (10.1018591s)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs -n 25
E0910 18:09:10.437570    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:10:33.901067    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs -n 25: (1m50.3587452s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:58 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:58 UTC | 10 Sep 24 17:58 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-885900                                                         | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:58 UTC | 10 Sep 24 17:58 UTC |
	| start   | -p functional-879800                                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:58 UTC | 10 Sep 24 18:02 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-879800                                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:02 UTC | 10 Sep 24 18:04 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:04 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:05 UTC |
	|         | minikube-local-cache-test:functional-879800                              |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache delete                                           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | minikube-local-cache-test:functional-879800                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	| ssh     | functional-879800 ssh sudo                                               | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-879800                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-879800 ssh                                                    | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache reload                                           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	| ssh     | functional-879800 ssh                                                    | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-879800 kubectl --                                             | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | --context functional-879800                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-879800                                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:06 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:06:15
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:06:15.155104    2616 out.go:345] Setting OutFile to fd 1004 ...
	I0910 18:06:15.204899    2616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:06:15.204899    2616 out.go:358] Setting ErrFile to fd 824...
	I0910 18:06:15.204899    2616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:06:15.221913    2616 out.go:352] Setting JSON to false
	I0910 18:06:15.223910    2616 start.go:129] hostinfo: {"hostname":"minikube5","uptime":102838,"bootTime":1725888736,"procs":180,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:06:15.223910    2616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:06:15.229175    2616 out.go:177] * [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:06:15.233026    2616 notify.go:220] Checking for updates...
	I0910 18:06:15.238765    2616 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:06:15.240640    2616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:06:15.243256    2616 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:06:15.245190    2616 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:06:15.252260    2616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:06:15.255131    2616 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:06:15.255321    2616 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:06:19.892658    2616 out.go:177] * Using the hyperv driver based on existing profile
	I0910 18:06:19.896282    2616 start.go:297] selected driver: hyperv
	I0910 18:06:19.896282    2616 start.go:901] validating driver "hyperv" against &{Name:functional-879800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-879800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.92 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:06:19.896282    2616 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:06:19.937003    2616 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:06:19.937003    2616 cni.go:84] Creating CNI manager for ""
	I0910 18:06:19.937003    2616 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 18:06:19.937535    2616 start.go:340] cluster config:
	{Name:functional-879800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-879800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.92 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:06:19.937614    2616 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:06:19.940819    2616 out.go:177] * Starting "functional-879800" primary control-plane node in "functional-879800" cluster
	I0910 18:06:19.943040    2616 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:06:19.943040    2616 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 18:06:19.943040    2616 cache.go:56] Caching tarball of preloaded images
	I0910 18:06:19.943040    2616 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 18:06:19.943040    2616 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 18:06:19.943040    2616 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\config.json ...
	I0910 18:06:19.945109    2616 start.go:360] acquireMachinesLock for functional-879800: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:06:19.945109    2616 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-879800"
	I0910 18:06:19.945109    2616 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:06:19.945109    2616 fix.go:54] fixHost starting: 
	I0910 18:06:19.946114    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:22.380354    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:22.380354    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:22.380354    2616 fix.go:112] recreateIfNeeded on functional-879800: state=Running err=<nil>
	W0910 18:06:22.380354    2616 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:06:22.384900    2616 out.go:177] * Updating the running hyperv "functional-879800" VM ...
	I0910 18:06:22.388001    2616 machine.go:93] provisionDockerMachine start ...
	I0910 18:06:22.388001    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:24.287582    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:24.287582    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:24.288143    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:26.556392    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:26.556392    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:26.560310    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:26.560310    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:26.560310    2616 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:06:26.690333    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-879800
	
	I0910 18:06:26.690333    2616 buildroot.go:166] provisioning hostname "functional-879800"
	I0910 18:06:26.690333    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:28.568756    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:28.568756    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:28.569034    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:30.830536    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:30.830536    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:30.836919    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:30.836919    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:30.836919    2616 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-879800 && echo "functional-879800" | sudo tee /etc/hostname
	I0910 18:06:30.993094    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-879800
	
	I0910 18:06:30.993094    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:32.829711    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:32.829711    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:32.829795    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:34.974069    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:34.974069    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:34.982625    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:34.982684    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:34.982684    2616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-879800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-879800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-879800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:06:35.118532    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:06:35.118532    2616 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 18:06:35.118532    2616 buildroot.go:174] setting up certificates
	I0910 18:06:35.118532    2616 provision.go:84] configureAuth start
	I0910 18:06:35.118532    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:36.893851    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:36.893851    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:36.903186    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:39.020715    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:39.030449    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:39.030449    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:40.803926    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:40.803926    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:40.803926    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:42.966621    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:42.976571    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:42.976623    2616 provision.go:143] copyHostCerts
	I0910 18:06:42.976623    2616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 18:06:42.976623    2616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 18:06:42.977198    2616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 18:06:42.977780    2616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 18:06:42.977780    2616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 18:06:42.978374    2616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 18:06:42.978947    2616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 18:06:42.978947    2616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 18:06:42.979478    2616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 18:06:42.980203    2616 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-879800 san=[127.0.0.1 172.31.208.92 functional-879800 localhost minikube]
	I0910 18:06:43.090179    2616 provision.go:177] copyRemoteCerts
	I0910 18:06:43.105271    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:06:43.105271    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:44.854813    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:44.854813    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:44.854813    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:47.047536    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:47.063030    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:47.063113    2616 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:06:47.167499    2616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.0618484s)
	I0910 18:06:47.167499    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:06:47.207475    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 18:06:47.247552    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:06:47.290551    2616 provision.go:87] duration metric: took 12.1711956s to configureAuth
	I0910 18:06:47.290551    2616 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:06:47.291138    2616 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:06:47.291218    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:49.034338    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:49.034338    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:49.034338    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:51.225619    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:51.225619    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:51.239392    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:51.239392    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:51.239392    2616 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 18:06:51.371331    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 18:06:51.371331    2616 buildroot.go:70] root file system type: tmpfs
	I0910 18:06:51.371859    2616 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 18:06:51.371944    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:53.156183    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:53.156183    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:53.156183    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:55.304702    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:55.304702    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:55.308087    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:55.308655    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:55.308655    2616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 18:06:55.462192    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 18:06:55.462192    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:57.243196    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:57.251552    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:57.251552    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:59.377939    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:59.377939    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:59.390511    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:59.390938    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:59.390938    2616 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 18:06:59.532158    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
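The SSH command above installs the new unit file only when it differs from the current one: `diff -u` exits non-zero on any difference, so the `|| { ... }` branch runs the replacement (and, in the real command, the daemon-reload and restart). The idiom can be exercised locally with throwaway files; the paths and contents here are illustrative, not the real `/lib/systemd/system/docker.service`:

```shell
set -eu

# Two versions of a config file (illustrative contents).
old=$(mktemp)
new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$old"
printf 'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' > "$new"

# diff exits 0 when the files match (nothing to do) and non-zero when they
# differ, in which case the new file is moved into place. The real command
# additionally runs systemctl daemon-reload / enable / restart in the braces.
diff -u "$old" "$new" || { mv "$new" "$old"; }

cat "$old"
```

Because the whole expression succeeds in both branches, it is safe under `set -e`; an unchanged file costs only a `diff`, avoiding a needless docker restart.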
	I0910 18:06:59.532215    2616 machine.go:96] duration metric: took 37.1416436s to provisionDockerMachine
	I0910 18:06:59.532245    2616 start.go:293] postStartSetup for "functional-879800" (driver="hyperv")
	I0910 18:06:59.532245    2616 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:06:59.541735    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:06:59.541895    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:01.353491    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:01.353491    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:01.363153    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:03.486271    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:03.486271    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:03.495332    2616 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:07:03.603791    2616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.0617819s)
	I0910 18:07:03.613220    2616 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:07:03.619386    2616 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:07:03.619449    2616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 18:07:03.619946    2616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 18:07:03.621156    2616 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 18:07:03.621752    2616 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4724\hosts -> hosts in /etc/test/nested/copy/4724
	I0910 18:07:03.628177    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4724
	I0910 18:07:03.645956    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 18:07:03.688834    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4724\hosts --> /etc/test/nested/copy/4724/hosts (40 bytes)
	I0910 18:07:03.730721    2616 start.go:296] duration metric: took 4.1981923s for postStartSetup
	I0910 18:07:03.730721    2616 fix.go:56] duration metric: took 43.7826508s for fixHost
	I0910 18:07:03.730721    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:05.520241    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:05.529608    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:05.529608    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:07.742608    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:07.742608    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:07.761535    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:07:07.761535    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:07:07.761535    2616 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:07:07.892513    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725991628.118143582
	
	I0910 18:07:07.892513    2616 fix.go:216] guest clock: 1725991628.118143582
	I0910 18:07:07.892513    2616 fix.go:229] Guest: 2024-09-10 18:07:08.118143582 +0000 UTC Remote: 2024-09-10 18:07:03.7307216 +0000 UTC m=+48.637658501 (delta=4.387421982s)
	I0910 18:07:07.892594    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:09.699959    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:09.709896    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:09.709896    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:11.876478    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:11.876478    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:11.889427    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:07:11.889928    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:07:11.889928    2616 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725991627
	I0910 18:07:12.030021    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 18:07:07 UTC 2024
	
	I0910 18:07:12.030054    2616 fix.go:236] clock set: Tue Sep 10 18:07:07 UTC 2024
	 (err=<nil>)
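The clock-skew fix above sets the guest clock from a Unix epoch (`sudo date -s @1725991627`), closing the ~4.4 s delta reported by `fix.go`. Converting that epoch back to UTC reproduces the timestamp the log reports; this assumes GNU `date`, which supports the `-d @epoch` form:

```shell
# Epoch taken from the log line "sudo date -s @1725991627".
# GNU date renders it in UTC; this matches "Tue Sep 10 18:07:07 UTC 2024".
date -u -d @1725991627 +'%a %b %e %H:%M:%S UTC %Y'
```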
	I0910 18:07:12.030085    2616 start.go:83] releasing machines lock for "functional-879800", held for 52.0814544s
	I0910 18:07:12.030235    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:13.852840    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:13.852840    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:13.862658    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:16.085578    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:16.085578    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:16.088184    2616 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 18:07:16.088184    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:16.097387    2616 ssh_runner.go:195] Run: cat /version.json
	I0910 18:07:16.098952    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:17.996190    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:17.996190    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:17.996272    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:17.996272    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:17.996272    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:17.996797    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:20.269206    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:20.269206    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:20.279230    2616 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:07:20.300225    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:20.300225    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:20.300820    2616 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:07:20.369072    2616 ssh_runner.go:235] Completed: cat /version.json: (4.2713197s)
	I0910 18:07:20.379719    2616 ssh_runner.go:195] Run: systemctl --version
	I0910 18:07:20.384502    2616 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.2958749s)
	W0910 18:07:20.384566    2616 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 18:07:20.399258    2616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:07:20.409641    2616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:07:20.419072    2616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:07:20.435320    2616 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 18:07:20.435320    2616 start.go:495] detecting cgroup driver to use...
	I0910 18:07:20.435320    2616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:07:20.475600    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0910 18:07:20.500662    2616 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 18:07:20.500662    2616 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 18:07:20.504135    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 18:07:20.524549    2616 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 18:07:20.535338    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:07:20.566752    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:07:20.595989    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:07:20.623069    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:07:20.650316    2616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:07:20.678358    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:07:20.707021    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:07:20.740126    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
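The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place, preserving each line's indentation by capturing the leading spaces in `\1`. A minimal sketch of the same substitutions against a throwaway config (the sample TOML content is illustrative; GNU sed is assumed for `-i`):

```shell
set -eu

# Throwaway stand-in for /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
EOF

# Same patterns as the log: capture leading indentation in \1 and rewrite
# the value, so nesting depth in the TOML file is preserved.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

cat "$cfg"
```

Pinning `sandbox_image` and forcing `SystemdCgroup = false` keeps containerd consistent with the "cgroupfs" cgroup driver the log selects a few lines earlier.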
	I0910 18:07:20.768558    2616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:07:20.796818    2616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:07:20.824008    2616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:07:21.069564    2616 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 18:07:21.093217    2616 start.go:495] detecting cgroup driver to use...
	I0910 18:07:21.107089    2616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 18:07:21.148922    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:07:21.186782    2616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:07:21.238467    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:07:21.269109    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:07:21.290198    2616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:07:21.331151    2616 ssh_runner.go:195] Run: which cri-dockerd
	I0910 18:07:21.345206    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 18:07:21.359817    2616 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 18:07:21.402047    2616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 18:07:21.624751    2616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 18:07:21.835049    2616 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 18:07:21.835280    2616 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 18:07:21.876048    2616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:07:22.104702    2616 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:08:33.411013    2616 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3015062s)
	I0910 18:08:33.421599    2616 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0910 18:08:33.490586    2616 out.go:201] 
	W0910 18:08:33.500999    2616 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 10 18:00:18 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:00:18 functional-879800 dockerd[654]: time="2024-09-10T18:00:18.048484724Z" level=info msg="Starting up"
	Sep 10 18:00:18 functional-879800 dockerd[654]: time="2024-09-10T18:00:18.049311817Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:00:18 functional-879800 dockerd[654]: time="2024-09-10T18:00:18.050378195Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.085669997Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113152633Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113198249Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113255169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113273076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113351804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113365408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113555476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113593289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113608394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113620399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113738040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113952016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117262689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117369927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117542988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117638322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117800380Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117961036Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143566607Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143812694Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143852008Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143870115Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143885420Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144004162Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144431814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144578366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144731920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144769633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144785139Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144800544Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144817951Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144833256Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144852263Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144874671Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144890676Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144903781Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144926489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144952798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144968904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144985010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144998314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145012219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145026424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145041330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145055335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145071540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145083945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145098250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145125159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145150368Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145174477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145188082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145200886Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145248603Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145268610Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145281715Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145295520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145308424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145321929Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145343637Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145564815Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145769388Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145831810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145867622Z" level=info msg="containerd successfully booted in 0.061994s"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.121888725Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.154996562Z" level=info msg="Loading containers: start."
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.319648547Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.537934908Z" level=info msg="Loading containers: done."
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.562684996Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.563059323Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.675990996Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:00:19 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.677273131Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:00:47 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.375581804Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377311327Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377382432Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377399033Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377426335Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Sep 10 18:00:48 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:00:48 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:00:48 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:00:48 functional-879800 dockerd[1071]: time="2024-09-10T18:00:48.429522254Z" level=info msg="Starting up"
	Sep 10 18:00:48 functional-879800 dockerd[1071]: time="2024-09-10T18:00:48.431008259Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:00:48 functional-879800 dockerd[1071]: time="2024-09-10T18:00:48.432192643Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1077
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.459542277Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.483918202Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.483975206Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484022209Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484041010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484077013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484097014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484291928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484426638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484453740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484468941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484501543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484661254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487483054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487608563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487911684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487954587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487995390Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488021392Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488424620Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488479324Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488499926Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488520527Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488550029Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488615634Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489261880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489448693Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489569201Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489694710Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489718112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489736913Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490086038Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490114540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490146942Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490166844Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490183545Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490199646Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490225648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490251750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490274451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490293053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490318254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490335756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490352257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490374458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490393260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490413461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490429562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490445963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490463665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490485466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490512768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490632877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490655178Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490865893Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491038605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491056507Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491073508Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491087909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491113411Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491129912Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491451335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491783358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.492068478Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.492180986Z" level=info msg="containerd successfully booted in 0.033661s"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.478401846Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.503818243Z" level=info msg="Loading containers: start."
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.627367383Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.742364917Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.836564080Z" level=info msg="Loading containers: done."
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.856624799Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.856752008Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.901724089Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.901914602Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:00:49 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:00:58 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.351344467Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.352684961Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.352996783Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.353065188Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.353083690Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:00:59 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:00:59 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:00:59 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:00:59 functional-879800 dockerd[1425]: time="2024-09-10T18:00:59.401497648Z" level=info msg="Starting up"
	Sep 10 18:00:59 functional-879800 dockerd[1425]: time="2024-09-10T18:00:59.402190497Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:00:59 functional-879800 dockerd[1425]: time="2024-09-10T18:00:59.403078760Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1431
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.428368049Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.449998479Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450096286Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450129688Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450142389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450164791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450174991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450305701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450395907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450413008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450422409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450452311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450544818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453475725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453561631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453687940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453768346Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453918756Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454018963Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454243479Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454324385Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454344286Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454358687Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454371288Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454409991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454713813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454920227Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454938528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454951729Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454965530Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454978831Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454991632Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455008133Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455022934Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455040736Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455054637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455066637Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455086439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455104040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455117041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455130342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455142643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455155844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455168445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455181646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455194647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455209148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455220848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455233149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455246050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455266652Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455286253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455299754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455311255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455351858Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455369459Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455382260Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455395761Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455406762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455419762Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455430363Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455625977Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455965501Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.456012404Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.456047007Z" level=info msg="containerd successfully booted in 0.028476s"
	Sep 10 18:01:00 functional-879800 dockerd[1425]: time="2024-09-10T18:01:00.554280190Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.339558604Z" level=info msg="Loading containers: start."
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.470992401Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.589342572Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.687264299Z" level=info msg="Loading containers: done."
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.712100256Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.712238065Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.753412878Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:01:03 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.757907896Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843562023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843615227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843627628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843743036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881096164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881214473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881246876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881574000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.918963031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.919037037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.919054638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.923719891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926194878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926268884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926345190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926692516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293309781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293378786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293422989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293516496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.380781776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.380924186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.381002392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.381188906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424221851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424448368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424528474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424720389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427031863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427122170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427138071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427705214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596259546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596394956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596410457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596594870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977439827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977546135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977564036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977701946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012423876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012517682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012536184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012641591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.036026293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.036651138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.036822651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.037129473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681454980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681601191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681638693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681841508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716309594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716465805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716490007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716597815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277424206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277585217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277609619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277749028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549402567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549468572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549487373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549601481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:35 functional-879800 dockerd[1425]: time="2024-09-10T18:01:35.869111260Z" level=info msg="ignoring event" container=f8fdd3b265468b67be1adcf36a8550e3434d52705793e41c0a4aa20a6347af10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:01:35 functional-879800 dockerd[1431]: time="2024-09-10T18:01:35.870287741Z" level=info msg="shim disconnected" id=f8fdd3b265468b67be1adcf36a8550e3434d52705793e41c0a4aa20a6347af10 namespace=moby
	Sep 10 18:01:35 functional-879800 dockerd[1431]: time="2024-09-10T18:01:35.870471854Z" level=warning msg="cleaning up after shim disconnected" id=f8fdd3b265468b67be1adcf36a8550e3434d52705793e41c0a4aa20a6347af10 namespace=moby
	Sep 10 18:01:35 functional-879800 dockerd[1431]: time="2024-09-10T18:01:35.870482255Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:01:36 functional-879800 dockerd[1431]: time="2024-09-10T18:01:36.047551699Z" level=info msg="shim disconnected" id=0fbb75cf9db50f56284994377744954858ba1a907974bd24a03fe204d61c7a95 namespace=moby
	Sep 10 18:01:36 functional-879800 dockerd[1425]: time="2024-09-10T18:01:36.047769414Z" level=info msg="ignoring event" container=0fbb75cf9db50f56284994377744954858ba1a907974bd24a03fe204d61c7a95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:01:36 functional-879800 dockerd[1431]: time="2024-09-10T18:01:36.047937726Z" level=warning msg="cleaning up after shim disconnected" id=0fbb75cf9db50f56284994377744954858ba1a907974bd24a03fe204d61c7a95 namespace=moby
	Sep 10 18:01:36 functional-879800 dockerd[1431]: time="2024-09-10T18:01:36.047999430Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:16 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:03:16 functional-879800 dockerd[1425]: time="2024-09-10T18:03:16.937916216Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.104463094Z" level=info msg="shim disconnected" id=b0e69451632d78db5e981243b5f1d242d579a9cb2b3e395174eb8c40be18ef68 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.105821089Z" level=info msg="ignoring event" container=b0e69451632d78db5e981243b5f1d242d579a9cb2b3e395174eb8c40be18ef68 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.106062106Z" level=warning msg="cleaning up after shim disconnected" id=b0e69451632d78db5e981243b5f1d242d579a9cb2b3e395174eb8c40be18ef68 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.106172314Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.155843996Z" level=info msg="ignoring event" container=7acd3a1687c63b8c19da613fcc79f3ba70ee7bdccf6bfa44b405956d9a17e01a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.159695566Z" level=info msg="shim disconnected" id=7acd3a1687c63b8c19da613fcc79f3ba70ee7bdccf6bfa44b405956d9a17e01a namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.159819375Z" level=warning msg="cleaning up after shim disconnected" id=7acd3a1687c63b8c19da613fcc79f3ba70ee7bdccf6bfa44b405956d9a17e01a namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.159869979Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.162078434Z" level=info msg="ignoring event" container=19fb530e5214fd7fb18cc260293e9cea4c56f4ca100b000374ac5e148404a252 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.166101616Z" level=info msg="shim disconnected" id=19fb530e5214fd7fb18cc260293e9cea4c56f4ca100b000374ac5e148404a252 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.166157220Z" level=warning msg="cleaning up after shim disconnected" id=19fb530e5214fd7fb18cc260293e9cea4c56f4ca100b000374ac5e148404a252 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.166167620Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.176262428Z" level=info msg="ignoring event" container=c0055974541d4cbd1c38c909cedf16c2ed8f6961f77502ddc6a666599fa123cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.185350765Z" level=info msg="shim disconnected" id=c0055974541d4cbd1c38c909cedf16c2ed8f6961f77502ddc6a666599fa123cc namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.185385668Z" level=warning msg="cleaning up after shim disconnected" id=c0055974541d4cbd1c38c909cedf16c2ed8f6961f77502ddc6a666599fa123cc namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.185396969Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.186332834Z" level=info msg="ignoring event" container=443d0b1bfd8f94fe9a5f42d5018ef70e241d0c811e2829414b69c6a7a70ff38e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.188048755Z" level=info msg="shim disconnected" id=443d0b1bfd8f94fe9a5f42d5018ef70e241d0c811e2829414b69c6a7a70ff38e namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.188830609Z" level=warning msg="cleaning up after shim disconnected" id=443d0b1bfd8f94fe9a5f42d5018ef70e241d0c811e2829414b69c6a7a70ff38e namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.188906315Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.200281212Z" level=info msg="ignoring event" container=f4c45e58e5ad749c5b6ac4af995a6c64ffc1454bdb9e450711e7d05383ac6e88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.200529330Z" level=info msg="shim disconnected" id=f4c45e58e5ad749c5b6ac4af995a6c64ffc1454bdb9e450711e7d05383ac6e88 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.200784748Z" level=warning msg="cleaning up after shim disconnected" id=f4c45e58e5ad749c5b6ac4af995a6c64ffc1454bdb9e450711e7d05383ac6e88 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.200839551Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.217470818Z" level=info msg="shim disconnected" id=2fded56066ba805dd117038086ed87c1c07f3b8f07964442e237b58e1f5c1af9 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.217710534Z" level=warning msg="cleaning up after shim disconnected" id=2fded56066ba805dd117038086ed87c1c07f3b8f07964442e237b58e1f5c1af9 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.217740836Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.219247142Z" level=info msg="ignoring event" container=2fded56066ba805dd117038086ed87c1c07f3b8f07964442e237b58e1f5c1af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.225671793Z" level=info msg="shim disconnected" id=afe36a9485ea5c3fb74437c46ad24d9fc5481951b877b6ee8d2994f6a2ddc7d8 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.225916110Z" level=warning msg="cleaning up after shim disconnected" id=afe36a9485ea5c3fb74437c46ad24d9fc5481951b877b6ee8d2994f6a2ddc7d8 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.226028518Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.247117296Z" level=info msg="ignoring event" container=afe36a9485ea5c3fb74437c46ad24d9fc5481951b877b6ee8d2994f6a2ddc7d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.247437619Z" level=info msg="ignoring event" container=d1a139975a680fdec8a1d64f0b916ac536e20a4dc7a0300b5fa144c176679c27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.251142178Z" level=info msg="shim disconnected" id=d1a139975a680fdec8a1d64f0b916ac536e20a4dc7a0300b5fa144c176679c27 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.251512604Z" level=warning msg="cleaning up after shim disconnected" id=d1a139975a680fdec8a1d64f0b916ac536e20a4dc7a0300b5fa144c176679c27 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.251636113Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.276859782Z" level=info msg="shim disconnected" id=0e83830f0f0ea0b4df02286ef66171d35a7aba6988e545ec7dbdd8a0e7f72148 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.277179904Z" level=warning msg="cleaning up after shim disconnected" id=0e83830f0f0ea0b4df02286ef66171d35a7aba6988e545ec7dbdd8a0e7f72148 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.277504227Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.283016913Z" level=info msg="shim disconnected" id=c72de8d88c7aba63faf855d522ae12d6f047e03d0461df47e86e59ce16212e7b namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.283534650Z" level=warning msg="cleaning up after shim disconnected" id=c72de8d88c7aba63faf855d522ae12d6f047e03d0461df47e86e59ce16212e7b namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.283722663Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.286985492Z" level=info msg="shim disconnected" id=9040449f00775036426d6df47306914e57c2bc2fba000ed833d7019bf407398c namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.287105900Z" level=warning msg="cleaning up after shim disconnected" id=9040449f00775036426d6df47306914e57c2bc2fba000ed833d7019bf407398c namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.287116101Z" level=info msg="ignoring event" container=0e83830f0f0ea0b4df02286ef66171d35a7aba6988e545ec7dbdd8a0e7f72148 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.287148703Z" level=info msg="ignoring event" container=c72de8d88c7aba63faf855d522ae12d6f047e03d0461df47e86e59ce16212e7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.287236709Z" level=info msg="ignoring event" container=9040449f00775036426d6df47306914e57c2bc2fba000ed833d7019bf407398c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.287482627Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.379983012Z" level=warning msg="cleanup warnings time=\"2024-09-10T18:03:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 18:03:22 functional-879800 dockerd[1431]: time="2024-09-10T18:03:22.099065101Z" level=info msg="shim disconnected" id=0fe7875f1ea8d61fc0c9e87d00b4d91469ae530abf0233a722959f81e0194889 namespace=moby
	Sep 10 18:03:22 functional-879800 dockerd[1425]: time="2024-09-10T18:03:22.099724747Z" level=info msg="ignoring event" container=0fe7875f1ea8d61fc0c9e87d00b4d91469ae530abf0233a722959f81e0194889 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:22 functional-879800 dockerd[1431]: time="2024-09-10T18:03:22.100757719Z" level=warning msg="cleaning up after shim disconnected" id=0fe7875f1ea8d61fc0c9e87d00b4d91469ae530abf0233a722959f81e0194889 namespace=moby
	Sep 10 18:03:22 functional-879800 dockerd[1431]: time="2024-09-10T18:03:22.101024838Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.012561677Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.051920467Z" level=info msg="ignoring event" container=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:27 functional-879800 dockerd[1431]: time="2024-09-10T18:03:27.052656729Z" level=info msg="shim disconnected" id=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572 namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1431]: time="2024-09-10T18:03:27.052756037Z" level=warning msg="cleaning up after shim disconnected" id=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572 namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1431]: time="2024-09-10T18:03:27.052802441Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.115701599Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.116521267Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.116719684Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.116799091Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:03:28 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:03:28 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:03:28 functional-879800 systemd[1]: docker.service: Consumed 4.993s CPU time.
	Sep 10 18:03:28 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:03:28 functional-879800 dockerd[4225]: time="2024-09-10T18:03:28.177678725Z" level=info msg="Starting up"
	Sep 10 18:03:28 functional-879800 dockerd[4225]: time="2024-09-10T18:03:28.178393684Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:03:28 functional-879800 dockerd[4225]: time="2024-09-10T18:03:28.179695692Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4232
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.207359481Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237220152Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237314660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237345862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237360364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237381065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237390766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237533778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237671289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237687191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237701792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237722194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237908509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240594931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240670738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240887856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240905057Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240941960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240959161Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241119775Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241144677Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241161778Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241175279Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241187380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241226984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241424400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241648818Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241672420Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241685922Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241697423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241709323Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241719824Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241734226Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241747927Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241758628Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241769428Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241857236Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241881338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241894339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241905440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241916441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241929442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241945843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241968845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241982446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241993947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242006348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242017149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242028350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242040051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242054052Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242071353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242081954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242091855Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242126558Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242141659Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242151360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242161861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242170662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242181063Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242190263Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242404181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242441984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242473787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242488388Z" level=info msg="containerd successfully booted in 0.036034s"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.221963311Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.251989072Z" level=info msg="Loading containers: start."
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.477893887Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.597202766Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.690319598Z" level=info msg="Loading containers: done."
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.713867228Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.714011940Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.759721887Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:03:29 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.760516852Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.474834842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.477836684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.477986196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.478476035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.510897746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.511034857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.511069460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.511269876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655061155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655210967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655245270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655529393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675075567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675276783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675387892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675715318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.709728657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.709965176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.709997579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.713421555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.746807643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.750921375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.751140292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.751395413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994018751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994220867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994425083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994619599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.435937252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.436180472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.436302481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.436537800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.630888601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.631197725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.631278432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.631740769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668391192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668465598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668552605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668683415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.773245454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.774814880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.774922088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.775131905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.943943668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.944004773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.944020374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.944179787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.488015566Z" level=info msg="shim disconnected" id=ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.488335787Z" level=warning msg="cleaning up after shim disconnected" id=ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.488651308Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.490343523Z" level=info msg="ignoring event" container=ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.872559495Z" level=info msg="shim disconnected" id=ec5fefa9ec4e4d3e503ab66ef7d7ee0d763b84ac316e50857a591b02760be35a namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.873405352Z" level=warning msg="cleaning up after shim disconnected" id=ec5fefa9ec4e4d3e503ab66ef7d7ee0d763b84ac316e50857a591b02760be35a namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.874702540Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.875848617Z" level=info msg="ignoring event" container=ec5fefa9ec4e4d3e503ab66ef7d7ee0d763b84ac316e50857a591b02760be35a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.888771789Z" level=info msg="shim disconnected" id=6e715701f9a8266eb62e2573ca940b61e284d0488b31fc1de779a3b6a25d7d0c namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.888810191Z" level=warning msg="cleaning up after shim disconnected" id=6e715701f9a8266eb62e2573ca940b61e284d0488b31fc1de779a3b6a25d7d0c namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.888819192Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.889035806Z" level=info msg="ignoring event" container=6e715701f9a8266eb62e2573ca940b61e284d0488b31fc1de779a3b6a25d7d0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.917781545Z" level=info msg="ignoring event" container=e3923829bb152a21bde50cf3a7e4abc655b6d8765a7c660bda0afaf4417b6e02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.918141469Z" level=info msg="shim disconnected" id=e3923829bb152a21bde50cf3a7e4abc655b6d8765a7c660bda0afaf4417b6e02 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.918260377Z" level=warning msg="cleaning up after shim disconnected" id=e3923829bb152a21bde50cf3a7e4abc655b6d8765a7c660bda0afaf4417b6e02 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.918272078Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.944717661Z" level=info msg="shim disconnected" id=a3e714586dcf6b36e8c5c2f012e6cbfa69737cb99e1aa1416f89384322466b2f namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.944835669Z" level=warning msg="cleaning up after shim disconnected" id=a3e714586dcf6b36e8c5c2f012e6cbfa69737cb99e1aa1416f89384322466b2f namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.944917975Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.949717498Z" level=info msg="ignoring event" container=a3e714586dcf6b36e8c5c2f012e6cbfa69737cb99e1aa1416f89384322466b2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.977057742Z" level=info msg="ignoring event" container=ccc45d35827479fb2a2ef42723180398bed51819259e00c6f8727e2de3d54fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.977481670Z" level=info msg="shim disconnected" id=ccc45d35827479fb2a2ef42723180398bed51819259e00c6f8727e2de3d54fbd namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.978397232Z" level=warning msg="cleaning up after shim disconnected" id=ccc45d35827479fb2a2ef42723180398bed51819259e00c6f8727e2de3d54fbd namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.978477638Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.000898249Z" level=info msg="shim disconnected" id=07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.001083962Z" level=warning msg="cleaning up after shim disconnected" id=07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.001195469Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.002897384Z" level=info msg="ignoring event" container=07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.002999991Z" level=info msg="ignoring event" container=976499ca237d00a46e41412bfdd50385936ad33df3496ca98cbb01e91d14a429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.003118699Z" level=info msg="shim disconnected" id=976499ca237d00a46e41412bfdd50385936ad33df3496ca98cbb01e91d14a429 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.003189804Z" level=warning msg="cleaning up after shim disconnected" id=976499ca237d00a46e41412bfdd50385936ad33df3496ca98cbb01e91d14a429 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.003227206Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.018068808Z" level=info msg="shim disconnected" id=5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.018993870Z" level=info msg="ignoring event" container=5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.019380696Z" level=info msg="ignoring event" container=d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.019830526Z" level=warning msg="cleaning up after shim disconnected" id=5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.019933033Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.025389001Z" level=info msg="shim disconnected" id=d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.027827366Z" level=warning msg="cleaning up after shim disconnected" id=d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.027941873Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.682948754Z" level=info msg="shim disconnected" id=6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.683012358Z" level=warning msg="cleaning up after shim disconnected" id=6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.683023959Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.683898418Z" level=info msg="ignoring event" container=6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.930124226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.930790771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.930965183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935005555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935089161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935102662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935277974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.936674368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.997074142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.997348360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.997478769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.999360296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.043805095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.044026110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.044152718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.044348231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.110987827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.111252645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.111371953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.111754779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396595197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396764509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396792111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396943421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.246438750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.247447118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.247714537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.247911850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.490659842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.490759649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.490809152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.491318787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:39 functional-879800 dockerd[4225]: time="2024-09-10T18:03:39.857401153Z" level=error msg="collecting stats for container /k8s_coredns_coredns-6f6b679f8f-266t8_kube-system_e2d0f1c5-7959-4f05-a592-c427855eb2da_1: invalid id: "
	Sep 10 18:03:39 functional-879800 dockerd[4225]: 2024/09/10 18:03:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Sep 10 18:03:43 functional-879800 dockerd[4225]: time="2024-09-10T18:03:43.336001164Z" level=info msg="ignoring event" container=37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:43 functional-879800 dockerd[4232]: time="2024-09-10T18:03:43.336882124Z" level=info msg="shim disconnected" id=37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db namespace=moby
	Sep 10 18:03:43 functional-879800 dockerd[4232]: time="2024-09-10T18:03:43.337599372Z" level=warning msg="cleaning up after shim disconnected" id=37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db namespace=moby
	Sep 10 18:03:43 functional-879800 dockerd[4232]: time="2024-09-10T18:03:43.338251216Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:59 functional-879800 dockerd[4225]: time="2024-09-10T18:03:59.091434642Z" level=info msg="ignoring event" container=b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:59 functional-879800 dockerd[4232]: time="2024-09-10T18:03:59.091715562Z" level=info msg="shim disconnected" id=b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72 namespace=moby
	Sep 10 18:03:59 functional-879800 dockerd[4232]: time="2024-09-10T18:03:59.091759665Z" level=warning msg="cleaning up after shim disconnected" id=b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72 namespace=moby
	Sep 10 18:03:59 functional-879800 dockerd[4232]: time="2024-09-10T18:03:59.091768466Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.349817160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.349968471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.349989472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.350086679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579090533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579554466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579693676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579959195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.183864383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184172805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184257111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184545932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142383799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142727123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142745425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142869734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.149939241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150201360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150333870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150634691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183608960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183799773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183823275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.184056992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.748853453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749108971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749218279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749397892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:07:22 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.352158112Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.454472074Z" level=info msg="ignoring event" container=ca368efebfef8f375661bc1f7feab0ccfe57faf05aab591cc3730852336e3631 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.455346635Z" level=info msg="shim disconnected" id=ca368efebfef8f375661bc1f7feab0ccfe57faf05aab591cc3730852336e3631 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.455998181Z" level=warning msg="cleaning up after shim disconnected" id=ca368efebfef8f375661bc1f7feab0ccfe57faf05aab591cc3730852336e3631 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.456247098Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.489687239Z" level=info msg="ignoring event" container=e1315bf73debfa4232f692c3e0a0b4b3d6ebfeb080aaca3518075c735293b608 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.492412930Z" level=info msg="shim disconnected" id=e1315bf73debfa4232f692c3e0a0b4b3d6ebfeb080aaca3518075c735293b608 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.492515837Z" level=warning msg="cleaning up after shim disconnected" id=e1315bf73debfa4232f692c3e0a0b4b3d6ebfeb080aaca3518075c735293b608 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.492571841Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.518042523Z" level=info msg="ignoring event" container=eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.520561100Z" level=info msg="shim disconnected" id=eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.520614103Z" level=warning msg="cleaning up after shim disconnected" id=eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.520624804Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.540950327Z" level=info msg="ignoring event" container=b91bb2bb949612368ff63dd975a8cddc51be42f2e2fb07f3abcde14dd62fe3f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.541259949Z" level=info msg="shim disconnected" id=b91bb2bb949612368ff63dd975a8cddc51be42f2e2fb07f3abcde14dd62fe3f6 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.541350255Z" level=warning msg="cleaning up after shim disconnected" id=b91bb2bb949612368ff63dd975a8cddc51be42f2e2fb07f3abcde14dd62fe3f6 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.541390758Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.560853320Z" level=info msg="ignoring event" container=f2e8bfe81fcb7f65d7564b425bc1ec750ed7d5f1c976edbcc00f0a02c9798ccd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.561386757Z" level=info msg="shim disconnected" id=f2e8bfe81fcb7f65d7564b425bc1ec750ed7d5f1c976edbcc00f0a02c9798ccd namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.561431160Z" level=warning msg="cleaning up after shim disconnected" id=f2e8bfe81fcb7f65d7564b425bc1ec750ed7d5f1c976edbcc00f0a02c9798ccd namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.561440261Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.579624034Z" level=info msg="ignoring event" container=7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.580145670Z" level=info msg="shim disconnected" id=7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.580207775Z" level=warning msg="cleaning up after shim disconnected" id=7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.580223876Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.613844829Z" level=info msg="shim disconnected" id=f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.614009341Z" level=warning msg="cleaning up after shim disconnected" id=f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.614114948Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.615510946Z" level=info msg="ignoring event" container=f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.621018031Z" level=info msg="shim disconnected" id=32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.621070135Z" level=warning msg="cleaning up after shim disconnected" id=32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.621080036Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.621344354Z" level=info msg="ignoring event" container=32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.636675327Z" level=info msg="ignoring event" container=1441db9085178360c9e1eab3a6f7848a379268e43886cd488c51c72a5ca5469e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.652287020Z" level=info msg="shim disconnected" id=1441db9085178360c9e1eab3a6f7848a379268e43886cd488c51c72a5ca5469e namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.652420729Z" level=warning msg="cleaning up after shim disconnected" id=1441db9085178360c9e1eab3a6f7848a379268e43886cd488c51c72a5ca5469e namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.652544838Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.661946796Z" level=info msg="ignoring event" container=8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.664658486Z" level=info msg="shim disconnected" id=8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.664874601Z" level=warning msg="cleaning up after shim disconnected" id=8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.664892402Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.695825967Z" level=info msg="ignoring event" container=70728a77068efe31fc3de9ed6238fa52a23c9b7aa136e603831e0473b2fbe635 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.696489314Z" level=info msg="shim disconnected" id=70728a77068efe31fc3de9ed6238fa52a23c9b7aa136e603831e0473b2fbe635 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.696534617Z" level=warning msg="cleaning up after shim disconnected" id=70728a77068efe31fc3de9ed6238fa52a23c9b7aa136e603831e0473b2fbe635 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.696545618Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.729004290Z" level=info msg="ignoring event" container=9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.735083615Z" level=info msg="shim disconnected" id=9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.736693128Z" level=warning msg="cleaning up after shim disconnected" id=9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.736835438Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.807212964Z" level=warning msg="cleanup warnings time=\"2024-09-10T18:07:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 18:07:27 functional-879800 dockerd[4225]: time="2024-09-10T18:07:27.429406593Z" level=info msg="ignoring event" container=bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:27 functional-879800 dockerd[4232]: time="2024-09-10T18:07:27.429688013Z" level=info msg="shim disconnected" id=bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6 namespace=moby
	Sep 10 18:07:27 functional-879800 dockerd[4232]: time="2024-09-10T18:07:27.430046838Z" level=warning msg="cleaning up after shim disconnected" id=bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6 namespace=moby
	Sep 10 18:07:27 functional-879800 dockerd[4232]: time="2024-09-10T18:07:27.430060039Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.442204332Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.478648728Z" level=info msg="ignoring event" container=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:32 functional-879800 dockerd[4232]: time="2024-09-10T18:07:32.479459150Z" level=info msg="shim disconnected" id=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035 namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4232]: time="2024-09-10T18:07:32.479530152Z" level=warning msg="cleaning up after shim disconnected" id=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035 namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4232]: time="2024-09-10T18:07:32.479542052Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.543970812Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.544248320Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.544421724Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.544458625Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:07:33 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:07:33 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:07:33 functional-879800 systemd[1]: docker.service: Consumed 9.274s CPU time.
	Sep 10 18:07:33 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:07:33 functional-879800 dockerd[8778]: time="2024-09-10T18:07:33.598343792Z" level=info msg="Starting up"
	Sep 10 18:08:33 functional-879800 dockerd[8778]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 10 18:08:33 functional-879800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 10 18:08:33 functional-879800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 10 18:08:33 functional-879800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0910 18:08:33.504279    2616 out.go:270] * 
	W0910 18:08:33.507031    2616 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:08:33.510319    2616 out.go:201] 
	
	
	==> Docker <==
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7'"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead'"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4'"
	Sep 10 18:09:33 functional-879800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 10 18:09:33 functional-879800 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID '9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114'"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79'"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426'"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740'"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19'"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6'"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="error getting RW layer size for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:09:33 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:09:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035'"
	Sep 10 18:09:34 functional-879800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Sep 10 18:09:34 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:09:34 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-10T18:09:36Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep10 18:02] kauditd_printk_skb: 10 callbacks suppressed
	[Sep10 18:03] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[  +0.537677] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +0.225711] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +0.257523] systemd-fstab-generator[3818]: Ignoring "noauto" option for root device
	[  +5.340147] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.877359] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.186798] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.179954] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.256144] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.037441] systemd-fstab-generator[4763]: Ignoring "noauto" option for root device
	[  +3.061765] kauditd_printk_skb: 214 callbacks suppressed
	[  +8.635556] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.985505] systemd-fstab-generator[6074]: Ignoring "noauto" option for root device
	[ +13.788298] kauditd_printk_skb: 17 callbacks suppressed
	[Sep10 18:04] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.694005] systemd-fstab-generator[6726]: Ignoring "noauto" option for root device
	[  +0.162430] kauditd_printk_skb: 23 callbacks suppressed
	[Sep10 18:06] hrtimer: interrupt took 3009709 ns
	[Sep10 18:07] systemd-fstab-generator[8301]: Ignoring "noauto" option for root device
	[  +0.146006] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.426119] systemd-fstab-generator[8351]: Ignoring "noauto" option for root device
	[  +0.222754] systemd-fstab-generator[8363]: Ignoring "noauto" option for root device
	[  +0.259233] systemd-fstab-generator[8377]: Ignoring "noauto" option for root device
	[  +5.236878] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 18:10:34 up 11 min,  0 users,  load average: 0.01, 0.29, 0.25
	Linux functional-879800 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 18:10:29 functional-879800 kubelet[6081]: E0910 18:10:29.963710    6081 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m8.49307467s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.152999    6081 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153145    6081 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153243    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153275    6081 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153337    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153380    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: I0910 18:10:34.153390    6081 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153426    6081 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153444    6081 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153457    6081 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153528    6081 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.153565    6081 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: I0910 18:10:34.153632    6081 setters.go:600] "Node became not ready" node="functional-879800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-10T18:10:34Z","lastTransitionTime":"2024-09-10T18:10:34Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 3m12.683061233s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"}
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.159626    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.160047    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.160797    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:10:34Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:10:34Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:10:34Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:10:34Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-10T18:10:34Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 3m12.683061233s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-879800\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800/status?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.161984    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.162123    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.162030    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.162608    6081 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.164037    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.165301    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.169125    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:10:34 functional-879800 kubelet[6081]: E0910 18:10:34.169225    6081 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0910 18:09:33.638892    6244 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:09:33.663803    6244 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:09:33.690476    6244 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:09:33.715482    6244 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:09:33.742541    6244 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:09:33.774180    6244 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:09:33.798203    6244 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:09:33.825192    6244 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800: exit status 2 (10.2798236s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-879800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (269.63s)
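
The ExtraConfig failure traces back to the dockerd journal excerpt earlier in the log: the daemon exits because it cannot dial /run/containerd/containerd.sock before its deadline, and every later CRI/kubelet error is fallout from that. When triaging a batch of these reports, the decisive journal line can be pulled out mechanically; a minimal sketch in Python (the sample line is copied verbatim from the dockerd journal above; the `daemon_failure_socket` helper and its output are illustrative, not part of the test suite):

```python
import re

# Excerpt copied from the dockerd journal in this report.
journal = '''Sep 10 18:08:33 functional-879800 dockerd[8778]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Sep 10 18:08:33 functional-879800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE'''

def daemon_failure_socket(text: str):
    """Return the first quoted *.sock path on a 'failed to start daemon' line."""
    for line in text.splitlines():
        if "failed to start daemon" in line:
            m = re.search(r'"([^"]+\.sock)"', line)
            return m.group(1) if m else None
    return None

print(daemon_failure_socket(journal))  # → /run/containerd/containerd.sock
```

The same pattern applies to journalctl output captured via `minikube ssh`, which is how these journal excerpts are collected in the first place.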

TestFunctional/serial/ComponentHealth (120.72s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-879800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-879800 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (2.1467561s)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-879800 get po -l tier=control-plane -n kube-system -o=json": exit status 1
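
The `connectex` refusal above means nothing was listening on 172.31.208.92:8441, the apiserver port this profile was started with (`--apiserver-port=8441`), which is consistent with the dead container runtime rather than a networking problem. A plain TCP probe is enough to distinguish "refused" from "unreachable" when reproducing this by hand; a hedged sketch (the `port_open` helper is illustrative and not part of minikube; the host and port would come from the log above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a plain TCP connect; a refused or timed-out connect
    mirrors the 'No connection could be made' error in the log."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Port 1 on localhost is essentially never listening, so this prints False,
# the same condition the kubectl call above ran into on port 8441.
print(port_open("127.0.0.1", 1))
```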
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800: exit status 2 (10.0870653s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs -n 25: (1m37.7122819s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:57 UTC | 10 Sep 24 17:58 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-885900 --log_dir                                                  | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:58 UTC | 10 Sep 24 17:58 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-885900                                                         | nospam-885900     | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:58 UTC | 10 Sep 24 17:58 UTC |
	| start   | -p functional-879800                                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:58 UTC | 10 Sep 24 18:02 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-879800                                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:02 UTC | 10 Sep 24 18:04 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:04 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache add                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:04 UTC | 10 Sep 24 18:05 UTC |
	|         | minikube-local-cache-test:functional-879800                              |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache delete                                           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | minikube-local-cache-test:functional-879800                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	| ssh     | functional-879800 ssh sudo                                               | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-879800                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-879800 ssh                                                    | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-879800 cache reload                                           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	| ssh     | functional-879800 ssh                                                    | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-879800 kubectl --                                             | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|         | --context functional-879800                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-879800                                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:06 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:06:15
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:06:15.155104    2616 out.go:345] Setting OutFile to fd 1004 ...
	I0910 18:06:15.204899    2616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:06:15.204899    2616 out.go:358] Setting ErrFile to fd 824...
	I0910 18:06:15.204899    2616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:06:15.221913    2616 out.go:352] Setting JSON to false
	I0910 18:06:15.223910    2616 start.go:129] hostinfo: {"hostname":"minikube5","uptime":102838,"bootTime":1725888736,"procs":180,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:06:15.223910    2616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:06:15.229175    2616 out.go:177] * [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:06:15.233026    2616 notify.go:220] Checking for updates...
	I0910 18:06:15.238765    2616 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:06:15.240640    2616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:06:15.243256    2616 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:06:15.245190    2616 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:06:15.252260    2616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:06:15.255131    2616 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:06:15.255321    2616 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:06:19.892658    2616 out.go:177] * Using the hyperv driver based on existing profile
	I0910 18:06:19.896282    2616 start.go:297] selected driver: hyperv
	I0910 18:06:19.896282    2616 start.go:901] validating driver "hyperv" against &{Name:functional-879800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:functional-879800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.92 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:06:19.896282    2616 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:06:19.937003    2616 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:06:19.937003    2616 cni.go:84] Creating CNI manager for ""
	I0910 18:06:19.937003    2616 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 18:06:19.937535    2616 start.go:340] cluster config:
	{Name:functional-879800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-879800 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.92 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:06:19.937614    2616 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:06:19.940819    2616 out.go:177] * Starting "functional-879800" primary control-plane node in "functional-879800" cluster
	I0910 18:06:19.943040    2616 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:06:19.943040    2616 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 18:06:19.943040    2616 cache.go:56] Caching tarball of preloaded images
	I0910 18:06:19.943040    2616 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 18:06:19.943040    2616 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 18:06:19.943040    2616 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-879800\config.json ...
	I0910 18:06:19.945109    2616 start.go:360] acquireMachinesLock for functional-879800: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:06:19.945109    2616 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-879800"
	I0910 18:06:19.945109    2616 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:06:19.945109    2616 fix.go:54] fixHost starting: 
	I0910 18:06:19.946114    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:22.380354    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:22.380354    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:22.380354    2616 fix.go:112] recreateIfNeeded on functional-879800: state=Running err=<nil>
	W0910 18:06:22.380354    2616 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:06:22.384900    2616 out.go:177] * Updating the running hyperv "functional-879800" VM ...
	I0910 18:06:22.388001    2616 machine.go:93] provisionDockerMachine start ...
	I0910 18:06:22.388001    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:24.287582    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:24.287582    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:24.288143    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:26.556392    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:26.556392    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:26.560310    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:26.560310    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:26.560310    2616 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:06:26.690333    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-879800
	
	I0910 18:06:26.690333    2616 buildroot.go:166] provisioning hostname "functional-879800"
	I0910 18:06:26.690333    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:28.568756    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:28.568756    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:28.569034    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:30.830536    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:30.830536    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:30.836919    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:30.836919    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:30.836919    2616 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-879800 && echo "functional-879800" | sudo tee /etc/hostname
	I0910 18:06:30.993094    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-879800
	
	I0910 18:06:30.993094    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:32.829711    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:32.829711    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:32.829795    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:34.974069    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:34.974069    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:34.982625    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:34.982684    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:34.982684    2616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-879800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-879800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-879800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:06:35.118532    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:06:35.118532    2616 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 18:06:35.118532    2616 buildroot.go:174] setting up certificates
	I0910 18:06:35.118532    2616 provision.go:84] configureAuth start
	I0910 18:06:35.118532    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:36.893851    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:36.893851    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:36.903186    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:39.020715    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:39.030449    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:39.030449    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:40.803926    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:40.803926    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:40.803926    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:42.966621    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:42.976571    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:42.976623    2616 provision.go:143] copyHostCerts
	I0910 18:06:42.976623    2616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 18:06:42.976623    2616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 18:06:42.977198    2616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 18:06:42.977780    2616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 18:06:42.977780    2616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 18:06:42.978374    2616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 18:06:42.978947    2616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 18:06:42.978947    2616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 18:06:42.979478    2616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 18:06:42.980203    2616 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-879800 san=[127.0.0.1 172.31.208.92 functional-879800 localhost minikube]
	I0910 18:06:43.090179    2616 provision.go:177] copyRemoteCerts
	I0910 18:06:43.105271    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:06:43.105271    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:44.854813    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:44.854813    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:44.854813    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:47.047536    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:47.063030    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:47.063113    2616 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:06:47.167499    2616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.0618484s)
	I0910 18:06:47.167499    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:06:47.207475    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 18:06:47.247552    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:06:47.290551    2616 provision.go:87] duration metric: took 12.1711956s to configureAuth
	I0910 18:06:47.290551    2616 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:06:47.291138    2616 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:06:47.291218    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:49.034338    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:49.034338    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:49.034338    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:51.225619    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:51.225619    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:51.239392    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:51.239392    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:51.239392    2616 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 18:06:51.371331    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 18:06:51.371331    2616 buildroot.go:70] root file system type: tmpfs
	I0910 18:06:51.371859    2616 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 18:06:51.371944    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:53.156183    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:53.156183    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:53.156183    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:55.304702    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:55.304702    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:55.308087    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:55.308655    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:55.308655    2616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 18:06:55.462192    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 18:06:55.462192    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:06:57.243196    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:06:57.251552    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:57.251552    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:06:59.377939    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:06:59.377939    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:06:59.390511    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:06:59.390938    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:06:59.390938    2616 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 18:06:59.532158    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
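The two SSH commands above form a write-then-diff-then-swap pattern: the candidate unit is written to `docker.service.new`, and `diff -u` (which exits 0 when the files are identical) gates the `mv`, `daemon-reload`, and `restart` so they only run when the unit actually changed. A minimal sketch of the same guard, using throwaway files rather than the real systemd paths:

```shell
# Install the candidate file only if it differs from the current one.
# Paths and contents are stand-ins; the real step targets
# /lib/systemd/system/docker.service and restarts the daemon on change.
set -eu
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd --tlsverify\n' > "$dir/docker.service.new"
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null; then
  mv "$dir/docker.service.new" "$dir/docker.service"
  changed=yes  # real flow: systemctl daemon-reload && systemctl restart docker
else
  changed=no
fi
echo "$changed"
```

Because `diff` is the guard, an unchanged unit makes the whole command a no-op, which is why this log line can complete in milliseconds with empty output.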
	I0910 18:06:59.532215    2616 machine.go:96] duration metric: took 37.1416436s to provisionDockerMachine
	I0910 18:06:59.532245    2616 start.go:293] postStartSetup for "functional-879800" (driver="hyperv")
	I0910 18:06:59.532245    2616 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:06:59.541735    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:06:59.541895    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:01.353491    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:01.353491    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:01.363153    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:03.486271    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:03.486271    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:03.495332    2616 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:07:03.603791    2616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.0617819s)
	I0910 18:07:03.613220    2616 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:07:03.619386    2616 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:07:03.619449    2616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 18:07:03.619946    2616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 18:07:03.621156    2616 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 18:07:03.621752    2616 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4724\hosts -> hosts in /etc/test/nested/copy/4724
	I0910 18:07:03.628177    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4724
	I0910 18:07:03.645956    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 18:07:03.688834    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4724\hosts --> /etc/test/nested/copy/4724/hosts (40 bytes)
	I0910 18:07:03.730721    2616 start.go:296] duration metric: took 4.1981923s for postStartSetup
	I0910 18:07:03.730721    2616 fix.go:56] duration metric: took 43.7826508s for fixHost
	I0910 18:07:03.730721    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:05.520241    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:05.529608    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:05.529608    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:07.742608    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:07.742608    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:07.761535    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:07:07.761535    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:07:07.761535    2616 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:07:07.892513    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725991628.118143582
	
	I0910 18:07:07.892513    2616 fix.go:216] guest clock: 1725991628.118143582
	I0910 18:07:07.892513    2616 fix.go:229] Guest: 2024-09-10 18:07:08.118143582 +0000 UTC Remote: 2024-09-10 18:07:03.7307216 +0000 UTC m=+48.637658501 (delta=4.387421982s)
	I0910 18:07:07.892594    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:09.699959    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:09.709896    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:09.709896    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:11.876478    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:11.876478    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:11.889427    2616 main.go:141] libmachine: Using SSH client type: native
	I0910 18:07:11.889928    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.208.92 22 <nil> <nil>}
	I0910 18:07:11.889928    2616 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725991627
	I0910 18:07:12.030021    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 18:07:07 UTC 2024
	
	I0910 18:07:12.030054    2616 fix.go:236] clock set: Tue Sep 10 18:07:07 UTC 2024
	 (err=<nil>)
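The clock fix above reads the guest's `date +%s.%N`, compares it with the host clock (here the delta was about 4.4s), and rewrites guest time with `sudo date -s @<epoch>`. A rough sketch of that comparison; the 2-second threshold is an assumption for illustration, not minikube's actual cutoff:

```shell
set -eu
guest=1725991628   # seconds part of `date +%s.%N` reported by the guest
host=1725991623    # host-side reference timestamp (example values)
delta=$((guest - host))
abs=${delta#-}     # cheap absolute value for a non-negative comparison
if [ "$abs" -gt 2 ]; then
  cmd="date -s @$host"   # the real step runs this via sudo over SSH
fi
echo "delta=${delta}s cmd=${cmd:-none}"
```

Setting the clock from an integer epoch (`@1725991627`) is also why the guest lands on a whole second in the log output that follows.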
	I0910 18:07:12.030085    2616 start.go:83] releasing machines lock for "functional-879800", held for 52.0814544s
	I0910 18:07:12.030235    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:13.852840    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:13.852840    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:13.862658    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:16.085578    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:16.085578    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:16.088184    2616 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 18:07:16.088184    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:16.097387    2616 ssh_runner.go:195] Run: cat /version.json
	I0910 18:07:16.098952    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
	I0910 18:07:17.996190    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:17.996190    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:17.996272    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:17.996272    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:07:17.996272    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:17.996797    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
	I0910 18:07:20.269206    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:20.269206    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:20.279230    2616 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:07:20.300225    2616 main.go:141] libmachine: [stdout =====>] : 172.31.208.92
	
	I0910 18:07:20.300225    2616 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:07:20.300820    2616 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
	I0910 18:07:20.369072    2616 ssh_runner.go:235] Completed: cat /version.json: (4.2713197s)
	I0910 18:07:20.379719    2616 ssh_runner.go:195] Run: systemctl --version
	I0910 18:07:20.384502    2616 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.2958749s)
	W0910 18:07:20.384566    2616 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
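The exit-127 warning above comes from the Windows binary name `curl.exe` being passed through to the Linux guest, where the command is plain `curl`; the probe reports "command not found" rather than actual registry reachability. A hedged sketch of a name-agnostic probe (the `pick_curl` helper is hypothetical, not minikube code):

```shell
set -eu
pick_curl() {
  # Prefer whichever binary name exists on the current PATH.
  for c in curl.exe curl; do
    command -v "$c" >/dev/null 2>&1 && { printf '%s\n' "$c"; return 0; }
  done
  return 1
}
# Simulate the Linux guest: a PATH containing only a plain `curl`.
bin=$(mktemp -d)
printf '#!/bin/sh\n' > "$bin/curl" && chmod +x "$bin/curl"
chosen=$(PATH="$bin" pick_curl)
echo "$chosen"
```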
	I0910 18:07:20.399258    2616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:07:20.409641    2616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:07:20.419072    2616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:07:20.435320    2616 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 18:07:20.435320    2616 start.go:495] detecting cgroup driver to use...
	I0910 18:07:20.435320    2616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:07:20.475600    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0910 18:07:20.500662    2616 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 18:07:20.500662    2616 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 18:07:20.504135    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 18:07:20.524549    2616 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 18:07:20.535338    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:07:20.566752    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:07:20.595989    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:07:20.623069    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:07:20.650316    2616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:07:20.678358    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:07:20.707021    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:07:20.740126    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 18:07:20.768558    2616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:07:20.796818    2616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:07:20.824008    2616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:07:21.069564    2616 ssh_runner.go:195] Run: sudo systemctl restart containerd
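The run of `sed -i -r` commands above edits `/etc/containerd/config.toml` in place: selecting the cgroupfs driver (`SystemdCgroup = false`), switching runtimes to `io.containerd.runc.v2`, and pinning `conf_dir`, then reloading and restarting containerd. A sketch of the core substitution against a throwaway config (GNU sed assumed; the TOML snippet is illustrative):

```shell
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution as the log: flip SystemdCgroup while preserving the
# line's leading indentation via the \1 capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

The capture-and-reinsert of leading spaces matters because TOML tooling and later sed passes in this sequence match on the same indented form.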
	I0910 18:07:21.093217    2616 start.go:495] detecting cgroup driver to use...
	I0910 18:07:21.107089    2616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 18:07:21.148922    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:07:21.186782    2616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:07:21.238467    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:07:21.269109    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:07:21.290198    2616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:07:21.331151    2616 ssh_runner.go:195] Run: which cri-dockerd
	I0910 18:07:21.345206    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 18:07:21.359817    2616 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 18:07:21.402047    2616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 18:07:21.624751    2616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 18:07:21.835049    2616 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 18:07:21.835280    2616 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 18:07:21.876048    2616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:07:22.104702    2616 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:08:33.411013    2616 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3015062s)
	I0910 18:08:33.421599    2616 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0910 18:08:33.490586    2616 out.go:201] 
	W0910 18:08:33.500999    2616 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 10 18:00:18 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:00:18 functional-879800 dockerd[654]: time="2024-09-10T18:00:18.048484724Z" level=info msg="Starting up"
	Sep 10 18:00:18 functional-879800 dockerd[654]: time="2024-09-10T18:00:18.049311817Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:00:18 functional-879800 dockerd[654]: time="2024-09-10T18:00:18.050378195Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.085669997Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113152633Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113198249Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113255169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113273076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113351804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113365408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113555476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113593289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113608394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113620399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113738040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.113952016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117262689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117369927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117542988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117638322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117800380Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.117961036Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143566607Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143812694Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143852008Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143870115Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.143885420Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144004162Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144431814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144578366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144731920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144769633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144785139Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144800544Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144817951Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144833256Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144852263Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144874671Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144890676Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144903781Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144926489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144952798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144968904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144985010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.144998314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145012219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145026424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145041330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145055335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145071540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145083945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145098250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145125159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145150368Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145174477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145188082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145200886Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145248603Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145268610Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145281715Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145295520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145308424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145321929Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145343637Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145564815Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145769388Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145831810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:00:18 functional-879800 dockerd[660]: time="2024-09-10T18:00:18.145867622Z" level=info msg="containerd successfully booted in 0.061994s"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.121888725Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.154996562Z" level=info msg="Loading containers: start."
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.319648547Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.537934908Z" level=info msg="Loading containers: done."
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.562684996Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.563059323Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.675990996Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:00:19 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:00:19 functional-879800 dockerd[654]: time="2024-09-10T18:00:19.677273131Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:00:47 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.375581804Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377311327Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377382432Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377399033Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:00:47 functional-879800 dockerd[654]: time="2024-09-10T18:00:47.377426335Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Sep 10 18:00:48 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:00:48 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:00:48 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:00:48 functional-879800 dockerd[1071]: time="2024-09-10T18:00:48.429522254Z" level=info msg="Starting up"
	Sep 10 18:00:48 functional-879800 dockerd[1071]: time="2024-09-10T18:00:48.431008259Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:00:48 functional-879800 dockerd[1071]: time="2024-09-10T18:00:48.432192643Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1077
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.459542277Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.483918202Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.483975206Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484022209Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484041010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484077013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484097014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484291928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484426638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484453740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484468941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484501543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.484661254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487483054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487608563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487911684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487954587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.487995390Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488021392Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488424620Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488479324Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488499926Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488520527Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488550029Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.488615634Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489261880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489448693Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489569201Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489694710Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489718112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.489736913Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490086038Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490114540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490146942Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490166844Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490183545Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490199646Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490225648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490251750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490274451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490293053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490318254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490335756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490352257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490374458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490393260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490413461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490429562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490445963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490463665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490485466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490512768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490632877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490655178Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.490865893Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491038605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491056507Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491073508Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491087909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491113411Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491129912Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491451335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.491783358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.492068478Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:00:48 functional-879800 dockerd[1077]: time="2024-09-10T18:00:48.492180986Z" level=info msg="containerd successfully booted in 0.033661s"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.478401846Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.503818243Z" level=info msg="Loading containers: start."
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.627367383Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.742364917Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.836564080Z" level=info msg="Loading containers: done."
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.856624799Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.856752008Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.901724089Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:00:49 functional-879800 dockerd[1071]: time="2024-09-10T18:00:49.901914602Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:00:49 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:00:58 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.351344467Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.352684961Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.352996783Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.353065188Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:00:58 functional-879800 dockerd[1071]: time="2024-09-10T18:00:58.353083690Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:00:59 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:00:59 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:00:59 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:00:59 functional-879800 dockerd[1425]: time="2024-09-10T18:00:59.401497648Z" level=info msg="Starting up"
	Sep 10 18:00:59 functional-879800 dockerd[1425]: time="2024-09-10T18:00:59.402190497Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:00:59 functional-879800 dockerd[1425]: time="2024-09-10T18:00:59.403078760Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1431
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.428368049Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.449998479Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450096286Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450129688Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450142389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450164791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450174991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450305701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450395907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450413008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450422409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450452311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.450544818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453475725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453561631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453687940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453768346Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.453918756Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454018963Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454243479Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454324385Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454344286Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454358687Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454371288Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454409991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454713813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454920227Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454938528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454951729Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454965530Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454978831Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.454991632Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455008133Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455022934Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455040736Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455054637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455066637Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455086439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455104040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455117041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455130342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455142643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455155844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455168445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455181646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455194647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455209148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455220848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455233149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455246050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455266652Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455286253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455299754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455311255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455351858Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455369459Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455382260Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455395761Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455406762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455419762Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455430363Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455625977Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.455965501Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.456012404Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:00:59 functional-879800 dockerd[1431]: time="2024-09-10T18:00:59.456047007Z" level=info msg="containerd successfully booted in 0.028476s"
	Sep 10 18:01:00 functional-879800 dockerd[1425]: time="2024-09-10T18:01:00.554280190Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.339558604Z" level=info msg="Loading containers: start."
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.470992401Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.589342572Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.687264299Z" level=info msg="Loading containers: done."
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.712100256Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.712238065Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.753412878Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:01:03 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:01:03 functional-879800 dockerd[1425]: time="2024-09-10T18:01:03.757907896Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843562023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843615227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843627628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.843743036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881096164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881214473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881246876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.881574000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.918963031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.919037037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.919054638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.923719891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926194878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926268884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926345190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:11 functional-879800 dockerd[1431]: time="2024-09-10T18:01:11.926692516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293309781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293378786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293422989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.293516496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.380781776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.380924186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.381002392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.381188906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424221851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424448368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424528474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.424720389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427031863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427122170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427138071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:12 functional-879800 dockerd[1431]: time="2024-09-10T18:01:12.427705214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596259546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596394956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596410457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.596594870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977439827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977546135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977564036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:24 functional-879800 dockerd[1431]: time="2024-09-10T18:01:24.977701946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012423876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012517682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012536184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.012641591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.036026293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.036651138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.036822651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.037129473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681454980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681601191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681638693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.681841508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716309594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716465805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716490007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:25 functional-879800 dockerd[1431]: time="2024-09-10T18:01:25.716597815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277424206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277585217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277609619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.277749028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549402567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549468572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549487373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:31 functional-879800 dockerd[1431]: time="2024-09-10T18:01:31.549601481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:01:35 functional-879800 dockerd[1425]: time="2024-09-10T18:01:35.869111260Z" level=info msg="ignoring event" container=f8fdd3b265468b67be1adcf36a8550e3434d52705793e41c0a4aa20a6347af10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:01:35 functional-879800 dockerd[1431]: time="2024-09-10T18:01:35.870287741Z" level=info msg="shim disconnected" id=f8fdd3b265468b67be1adcf36a8550e3434d52705793e41c0a4aa20a6347af10 namespace=moby
	Sep 10 18:01:35 functional-879800 dockerd[1431]: time="2024-09-10T18:01:35.870471854Z" level=warning msg="cleaning up after shim disconnected" id=f8fdd3b265468b67be1adcf36a8550e3434d52705793e41c0a4aa20a6347af10 namespace=moby
	Sep 10 18:01:35 functional-879800 dockerd[1431]: time="2024-09-10T18:01:35.870482255Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:01:36 functional-879800 dockerd[1431]: time="2024-09-10T18:01:36.047551699Z" level=info msg="shim disconnected" id=0fbb75cf9db50f56284994377744954858ba1a907974bd24a03fe204d61c7a95 namespace=moby
	Sep 10 18:01:36 functional-879800 dockerd[1425]: time="2024-09-10T18:01:36.047769414Z" level=info msg="ignoring event" container=0fbb75cf9db50f56284994377744954858ba1a907974bd24a03fe204d61c7a95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:01:36 functional-879800 dockerd[1431]: time="2024-09-10T18:01:36.047937726Z" level=warning msg="cleaning up after shim disconnected" id=0fbb75cf9db50f56284994377744954858ba1a907974bd24a03fe204d61c7a95 namespace=moby
	Sep 10 18:01:36 functional-879800 dockerd[1431]: time="2024-09-10T18:01:36.047999430Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:16 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:03:16 functional-879800 dockerd[1425]: time="2024-09-10T18:03:16.937916216Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.104463094Z" level=info msg="shim disconnected" id=b0e69451632d78db5e981243b5f1d242d579a9cb2b3e395174eb8c40be18ef68 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.105821089Z" level=info msg="ignoring event" container=b0e69451632d78db5e981243b5f1d242d579a9cb2b3e395174eb8c40be18ef68 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.106062106Z" level=warning msg="cleaning up after shim disconnected" id=b0e69451632d78db5e981243b5f1d242d579a9cb2b3e395174eb8c40be18ef68 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.106172314Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.155843996Z" level=info msg="ignoring event" container=7acd3a1687c63b8c19da613fcc79f3ba70ee7bdccf6bfa44b405956d9a17e01a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.159695566Z" level=info msg="shim disconnected" id=7acd3a1687c63b8c19da613fcc79f3ba70ee7bdccf6bfa44b405956d9a17e01a namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.159819375Z" level=warning msg="cleaning up after shim disconnected" id=7acd3a1687c63b8c19da613fcc79f3ba70ee7bdccf6bfa44b405956d9a17e01a namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.159869979Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.162078434Z" level=info msg="ignoring event" container=19fb530e5214fd7fb18cc260293e9cea4c56f4ca100b000374ac5e148404a252 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.166101616Z" level=info msg="shim disconnected" id=19fb530e5214fd7fb18cc260293e9cea4c56f4ca100b000374ac5e148404a252 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.166157220Z" level=warning msg="cleaning up after shim disconnected" id=19fb530e5214fd7fb18cc260293e9cea4c56f4ca100b000374ac5e148404a252 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.166167620Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.176262428Z" level=info msg="ignoring event" container=c0055974541d4cbd1c38c909cedf16c2ed8f6961f77502ddc6a666599fa123cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.185350765Z" level=info msg="shim disconnected" id=c0055974541d4cbd1c38c909cedf16c2ed8f6961f77502ddc6a666599fa123cc namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.185385668Z" level=warning msg="cleaning up after shim disconnected" id=c0055974541d4cbd1c38c909cedf16c2ed8f6961f77502ddc6a666599fa123cc namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.185396969Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.186332834Z" level=info msg="ignoring event" container=443d0b1bfd8f94fe9a5f42d5018ef70e241d0c811e2829414b69c6a7a70ff38e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.188048755Z" level=info msg="shim disconnected" id=443d0b1bfd8f94fe9a5f42d5018ef70e241d0c811e2829414b69c6a7a70ff38e namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.188830609Z" level=warning msg="cleaning up after shim disconnected" id=443d0b1bfd8f94fe9a5f42d5018ef70e241d0c811e2829414b69c6a7a70ff38e namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.188906315Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.200281212Z" level=info msg="ignoring event" container=f4c45e58e5ad749c5b6ac4af995a6c64ffc1454bdb9e450711e7d05383ac6e88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.200529330Z" level=info msg="shim disconnected" id=f4c45e58e5ad749c5b6ac4af995a6c64ffc1454bdb9e450711e7d05383ac6e88 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.200784748Z" level=warning msg="cleaning up after shim disconnected" id=f4c45e58e5ad749c5b6ac4af995a6c64ffc1454bdb9e450711e7d05383ac6e88 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.200839551Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.217470818Z" level=info msg="shim disconnected" id=2fded56066ba805dd117038086ed87c1c07f3b8f07964442e237b58e1f5c1af9 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.217710534Z" level=warning msg="cleaning up after shim disconnected" id=2fded56066ba805dd117038086ed87c1c07f3b8f07964442e237b58e1f5c1af9 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.217740836Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.219247142Z" level=info msg="ignoring event" container=2fded56066ba805dd117038086ed87c1c07f3b8f07964442e237b58e1f5c1af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.225671793Z" level=info msg="shim disconnected" id=afe36a9485ea5c3fb74437c46ad24d9fc5481951b877b6ee8d2994f6a2ddc7d8 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.225916110Z" level=warning msg="cleaning up after shim disconnected" id=afe36a9485ea5c3fb74437c46ad24d9fc5481951b877b6ee8d2994f6a2ddc7d8 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.226028518Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.247117296Z" level=info msg="ignoring event" container=afe36a9485ea5c3fb74437c46ad24d9fc5481951b877b6ee8d2994f6a2ddc7d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.247437619Z" level=info msg="ignoring event" container=d1a139975a680fdec8a1d64f0b916ac536e20a4dc7a0300b5fa144c176679c27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.251142178Z" level=info msg="shim disconnected" id=d1a139975a680fdec8a1d64f0b916ac536e20a4dc7a0300b5fa144c176679c27 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.251512604Z" level=warning msg="cleaning up after shim disconnected" id=d1a139975a680fdec8a1d64f0b916ac536e20a4dc7a0300b5fa144c176679c27 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.251636113Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.276859782Z" level=info msg="shim disconnected" id=0e83830f0f0ea0b4df02286ef66171d35a7aba6988e545ec7dbdd8a0e7f72148 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.277179904Z" level=warning msg="cleaning up after shim disconnected" id=0e83830f0f0ea0b4df02286ef66171d35a7aba6988e545ec7dbdd8a0e7f72148 namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.277504227Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.283016913Z" level=info msg="shim disconnected" id=c72de8d88c7aba63faf855d522ae12d6f047e03d0461df47e86e59ce16212e7b namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.283534650Z" level=warning msg="cleaning up after shim disconnected" id=c72de8d88c7aba63faf855d522ae12d6f047e03d0461df47e86e59ce16212e7b namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.283722663Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.286985492Z" level=info msg="shim disconnected" id=9040449f00775036426d6df47306914e57c2bc2fba000ed833d7019bf407398c namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.287105900Z" level=warning msg="cleaning up after shim disconnected" id=9040449f00775036426d6df47306914e57c2bc2fba000ed833d7019bf407398c namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.287116101Z" level=info msg="ignoring event" container=0e83830f0f0ea0b4df02286ef66171d35a7aba6988e545ec7dbdd8a0e7f72148 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.287148703Z" level=info msg="ignoring event" container=c72de8d88c7aba63faf855d522ae12d6f047e03d0461df47e86e59ce16212e7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1425]: time="2024-09-10T18:03:17.287236709Z" level=info msg="ignoring event" container=9040449f00775036426d6df47306914e57c2bc2fba000ed833d7019bf407398c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.287482627Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:17 functional-879800 dockerd[1431]: time="2024-09-10T18:03:17.379983012Z" level=warning msg="cleanup warnings time=\"2024-09-10T18:03:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 18:03:22 functional-879800 dockerd[1431]: time="2024-09-10T18:03:22.099065101Z" level=info msg="shim disconnected" id=0fe7875f1ea8d61fc0c9e87d00b4d91469ae530abf0233a722959f81e0194889 namespace=moby
	Sep 10 18:03:22 functional-879800 dockerd[1425]: time="2024-09-10T18:03:22.099724747Z" level=info msg="ignoring event" container=0fe7875f1ea8d61fc0c9e87d00b4d91469ae530abf0233a722959f81e0194889 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:22 functional-879800 dockerd[1431]: time="2024-09-10T18:03:22.100757719Z" level=warning msg="cleaning up after shim disconnected" id=0fe7875f1ea8d61fc0c9e87d00b4d91469ae530abf0233a722959f81e0194889 namespace=moby
	Sep 10 18:03:22 functional-879800 dockerd[1431]: time="2024-09-10T18:03:22.101024838Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.012561677Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.051920467Z" level=info msg="ignoring event" container=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:27 functional-879800 dockerd[1431]: time="2024-09-10T18:03:27.052656729Z" level=info msg="shim disconnected" id=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572 namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1431]: time="2024-09-10T18:03:27.052756037Z" level=warning msg="cleaning up after shim disconnected" id=935ff5cd94293207d9b8ff0882b7a61b6fd9dab977003e5d0bbe330c64fb0572 namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1431]: time="2024-09-10T18:03:27.052802441Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.115701599Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.116521267Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.116719684Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:03:27 functional-879800 dockerd[1425]: time="2024-09-10T18:03:27.116799091Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:03:28 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:03:28 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:03:28 functional-879800 systemd[1]: docker.service: Consumed 4.993s CPU time.
	Sep 10 18:03:28 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:03:28 functional-879800 dockerd[4225]: time="2024-09-10T18:03:28.177678725Z" level=info msg="Starting up"
	Sep 10 18:03:28 functional-879800 dockerd[4225]: time="2024-09-10T18:03:28.178393684Z" level=info msg="containerd not running, starting managed containerd"
	Sep 10 18:03:28 functional-879800 dockerd[4225]: time="2024-09-10T18:03:28.179695692Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4232
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.207359481Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237220152Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237314660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237345862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237360364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237381065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237390766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237533778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237671289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237687191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237701792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237722194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.237908509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240594931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240670738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240887856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240905057Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240941960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.240959161Z" level=info msg="metadata content store policy set" policy=shared
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241119775Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241144677Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241161778Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241175279Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241187380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241226984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241424400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241648818Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241672420Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241685922Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241697423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241709323Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241719824Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241734226Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241747927Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241758628Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241769428Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241857236Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241881338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241894339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241905440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241916441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241929442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241945843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241968845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241982446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.241993947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242006348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242017149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242028350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242040051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242054052Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242071353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242081954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242091855Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242126558Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242141659Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242151360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242161861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242170662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242181063Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242190263Z" level=info msg="NRI interface is disabled by configuration."
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242404181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242441984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242473787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 10 18:03:28 functional-879800 dockerd[4232]: time="2024-09-10T18:03:28.242488388Z" level=info msg="containerd successfully booted in 0.036034s"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.221963311Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.251989072Z" level=info msg="Loading containers: start."
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.477893887Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.597202766Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.690319598Z" level=info msg="Loading containers: done."
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.713867228Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.714011940Z" level=info msg="Daemon has completed initialization"
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.759721887Z" level=info msg="API listen on [::]:2376"
	Sep 10 18:03:29 functional-879800 systemd[1]: Started Docker Application Container Engine.
	Sep 10 18:03:29 functional-879800 dockerd[4225]: time="2024-09-10T18:03:29.760516852Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.474834842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.477836684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.477986196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.478476035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.510897746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.511034857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.511069460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.511269876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655061155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655210967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655245270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.655529393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675075567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675276783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675387892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.675715318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.709728657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.709965176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.709997579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.713421555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.746807643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.750921375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.751140292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.751395413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994018751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994220867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994425083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:31 functional-879800 dockerd[4232]: time="2024-09-10T18:03:31.994619599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.435937252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.436180472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.436302481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.436537800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.630888601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.631197725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.631278432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.631740769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668391192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668465598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668552605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.668683415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.773245454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.774814880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.774922088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.775131905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.943943668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.944004773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.944020374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:32 functional-879800 dockerd[4232]: time="2024-09-10T18:03:32.944179787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.488015566Z" level=info msg="shim disconnected" id=ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.488335787Z" level=warning msg="cleaning up after shim disconnected" id=ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.488651308Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.490343523Z" level=info msg="ignoring event" container=ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.872559495Z" level=info msg="shim disconnected" id=ec5fefa9ec4e4d3e503ab66ef7d7ee0d763b84ac316e50857a591b02760be35a namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.873405352Z" level=warning msg="cleaning up after shim disconnected" id=ec5fefa9ec4e4d3e503ab66ef7d7ee0d763b84ac316e50857a591b02760be35a namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.874702540Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.875848617Z" level=info msg="ignoring event" container=ec5fefa9ec4e4d3e503ab66ef7d7ee0d763b84ac316e50857a591b02760be35a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.888771789Z" level=info msg="shim disconnected" id=6e715701f9a8266eb62e2573ca940b61e284d0488b31fc1de779a3b6a25d7d0c namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.888810191Z" level=warning msg="cleaning up after shim disconnected" id=6e715701f9a8266eb62e2573ca940b61e284d0488b31fc1de779a3b6a25d7d0c namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.888819192Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.889035806Z" level=info msg="ignoring event" container=6e715701f9a8266eb62e2573ca940b61e284d0488b31fc1de779a3b6a25d7d0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.917781545Z" level=info msg="ignoring event" container=e3923829bb152a21bde50cf3a7e4abc655b6d8765a7c660bda0afaf4417b6e02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.918141469Z" level=info msg="shim disconnected" id=e3923829bb152a21bde50cf3a7e4abc655b6d8765a7c660bda0afaf4417b6e02 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.918260377Z" level=warning msg="cleaning up after shim disconnected" id=e3923829bb152a21bde50cf3a7e4abc655b6d8765a7c660bda0afaf4417b6e02 namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.918272078Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.944717661Z" level=info msg="shim disconnected" id=a3e714586dcf6b36e8c5c2f012e6cbfa69737cb99e1aa1416f89384322466b2f namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.944835669Z" level=warning msg="cleaning up after shim disconnected" id=a3e714586dcf6b36e8c5c2f012e6cbfa69737cb99e1aa1416f89384322466b2f namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.944917975Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.949717498Z" level=info msg="ignoring event" container=a3e714586dcf6b36e8c5c2f012e6cbfa69737cb99e1aa1416f89384322466b2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4225]: time="2024-09-10T18:03:33.977057742Z" level=info msg="ignoring event" container=ccc45d35827479fb2a2ef42723180398bed51819259e00c6f8727e2de3d54fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.977481670Z" level=info msg="shim disconnected" id=ccc45d35827479fb2a2ef42723180398bed51819259e00c6f8727e2de3d54fbd namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.978397232Z" level=warning msg="cleaning up after shim disconnected" id=ccc45d35827479fb2a2ef42723180398bed51819259e00c6f8727e2de3d54fbd namespace=moby
	Sep 10 18:03:33 functional-879800 dockerd[4232]: time="2024-09-10T18:03:33.978477638Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.000898249Z" level=info msg="shim disconnected" id=07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.001083962Z" level=warning msg="cleaning up after shim disconnected" id=07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.001195469Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.002897384Z" level=info msg="ignoring event" container=07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.002999991Z" level=info msg="ignoring event" container=976499ca237d00a46e41412bfdd50385936ad33df3496ca98cbb01e91d14a429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.003118699Z" level=info msg="shim disconnected" id=976499ca237d00a46e41412bfdd50385936ad33df3496ca98cbb01e91d14a429 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.003189804Z" level=warning msg="cleaning up after shim disconnected" id=976499ca237d00a46e41412bfdd50385936ad33df3496ca98cbb01e91d14a429 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.003227206Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.018068808Z" level=info msg="shim disconnected" id=5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.018993870Z" level=info msg="ignoring event" container=5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.019380696Z" level=info msg="ignoring event" container=d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.019830526Z" level=warning msg="cleaning up after shim disconnected" id=5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.019933033Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.025389001Z" level=info msg="shim disconnected" id=d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.027827366Z" level=warning msg="cleaning up after shim disconnected" id=d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543 namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.027941873Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.682948754Z" level=info msg="shim disconnected" id=6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.683012358Z" level=warning msg="cleaning up after shim disconnected" id=6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.683023959Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:34 functional-879800 dockerd[4225]: time="2024-09-10T18:03:34.683898418Z" level=info msg="ignoring event" container=6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.930124226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.930790771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.930965183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935005555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935089161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935102662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.935277974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.936674368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.997074142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.997348360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.997478769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:34 functional-879800 dockerd[4232]: time="2024-09-10T18:03:34.999360296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.043805095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.044026110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.044152718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.044348231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.110987827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.111252645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.111371953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.111754779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396595197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396764509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396792111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:35 functional-879800 dockerd[4232]: time="2024-09-10T18:03:35.396943421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.246438750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.247447118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.247714537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.247911850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.490659842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.490759649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.490809152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:38 functional-879800 dockerd[4232]: time="2024-09-10T18:03:38.491318787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:03:39 functional-879800 dockerd[4225]: time="2024-09-10T18:03:39.857401153Z" level=error msg="collecting stats for container /k8s_coredns_coredns-6f6b679f8f-266t8_kube-system_e2d0f1c5-7959-4f05-a592-c427855eb2da_1: invalid id: "
	Sep 10 18:03:39 functional-879800 dockerd[4225]: 2024/09/10 18:03:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Sep 10 18:03:43 functional-879800 dockerd[4225]: time="2024-09-10T18:03:43.336001164Z" level=info msg="ignoring event" container=37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:43 functional-879800 dockerd[4232]: time="2024-09-10T18:03:43.336882124Z" level=info msg="shim disconnected" id=37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db namespace=moby
	Sep 10 18:03:43 functional-879800 dockerd[4232]: time="2024-09-10T18:03:43.337599372Z" level=warning msg="cleaning up after shim disconnected" id=37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db namespace=moby
	Sep 10 18:03:43 functional-879800 dockerd[4232]: time="2024-09-10T18:03:43.338251216Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:03:59 functional-879800 dockerd[4225]: time="2024-09-10T18:03:59.091434642Z" level=info msg="ignoring event" container=b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:03:59 functional-879800 dockerd[4232]: time="2024-09-10T18:03:59.091715562Z" level=info msg="shim disconnected" id=b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72 namespace=moby
	Sep 10 18:03:59 functional-879800 dockerd[4232]: time="2024-09-10T18:03:59.091759665Z" level=warning msg="cleaning up after shim disconnected" id=b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72 namespace=moby
	Sep 10 18:03:59 functional-879800 dockerd[4232]: time="2024-09-10T18:03:59.091768466Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.349817160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.349968471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.349989472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.350086679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579090533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579554466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579693676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:00 functional-879800 dockerd[4232]: time="2024-09-10T18:04:00.579959195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.183864383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184172805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184257111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:01 functional-879800 dockerd[4232]: time="2024-09-10T18:04:01.184545932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142383799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142727123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142745425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.142869734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.149939241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150201360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150333870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.150634691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183608960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183799773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.183823275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.184056992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.748853453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749108971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749218279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:04:04 functional-879800 dockerd[4232]: time="2024-09-10T18:04:04.749397892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:07:22 functional-879800 systemd[1]: Stopping Docker Application Container Engine...
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.352158112Z" level=info msg="Processing signal 'terminated'"
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.454472074Z" level=info msg="ignoring event" container=ca368efebfef8f375661bc1f7feab0ccfe57faf05aab591cc3730852336e3631 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.455346635Z" level=info msg="shim disconnected" id=ca368efebfef8f375661bc1f7feab0ccfe57faf05aab591cc3730852336e3631 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.455998181Z" level=warning msg="cleaning up after shim disconnected" id=ca368efebfef8f375661bc1f7feab0ccfe57faf05aab591cc3730852336e3631 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.456247098Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.489687239Z" level=info msg="ignoring event" container=e1315bf73debfa4232f692c3e0a0b4b3d6ebfeb080aaca3518075c735293b608 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.492412930Z" level=info msg="shim disconnected" id=e1315bf73debfa4232f692c3e0a0b4b3d6ebfeb080aaca3518075c735293b608 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.492515837Z" level=warning msg="cleaning up after shim disconnected" id=e1315bf73debfa4232f692c3e0a0b4b3d6ebfeb080aaca3518075c735293b608 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.492571841Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.518042523Z" level=info msg="ignoring event" container=eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.520561100Z" level=info msg="shim disconnected" id=eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.520614103Z" level=warning msg="cleaning up after shim disconnected" id=eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.520624804Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.540950327Z" level=info msg="ignoring event" container=b91bb2bb949612368ff63dd975a8cddc51be42f2e2fb07f3abcde14dd62fe3f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.541259949Z" level=info msg="shim disconnected" id=b91bb2bb949612368ff63dd975a8cddc51be42f2e2fb07f3abcde14dd62fe3f6 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.541350255Z" level=warning msg="cleaning up after shim disconnected" id=b91bb2bb949612368ff63dd975a8cddc51be42f2e2fb07f3abcde14dd62fe3f6 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.541390758Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.560853320Z" level=info msg="ignoring event" container=f2e8bfe81fcb7f65d7564b425bc1ec750ed7d5f1c976edbcc00f0a02c9798ccd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.561386757Z" level=info msg="shim disconnected" id=f2e8bfe81fcb7f65d7564b425bc1ec750ed7d5f1c976edbcc00f0a02c9798ccd namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.561431160Z" level=warning msg="cleaning up after shim disconnected" id=f2e8bfe81fcb7f65d7564b425bc1ec750ed7d5f1c976edbcc00f0a02c9798ccd namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.561440261Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.579624034Z" level=info msg="ignoring event" container=7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.580145670Z" level=info msg="shim disconnected" id=7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.580207775Z" level=warning msg="cleaning up after shim disconnected" id=7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.580223876Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.613844829Z" level=info msg="shim disconnected" id=f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.614009341Z" level=warning msg="cleaning up after shim disconnected" id=f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.614114948Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.615510946Z" level=info msg="ignoring event" container=f1a968c18d127ecd7f15f98201cf008290b23e40a654359ea1cf579b756ef34f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.621018031Z" level=info msg="shim disconnected" id=32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.621070135Z" level=warning msg="cleaning up after shim disconnected" id=32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.621080036Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.621344354Z" level=info msg="ignoring event" container=32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.636675327Z" level=info msg="ignoring event" container=1441db9085178360c9e1eab3a6f7848a379268e43886cd488c51c72a5ca5469e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.652287020Z" level=info msg="shim disconnected" id=1441db9085178360c9e1eab3a6f7848a379268e43886cd488c51c72a5ca5469e namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.652420729Z" level=warning msg="cleaning up after shim disconnected" id=1441db9085178360c9e1eab3a6f7848a379268e43886cd488c51c72a5ca5469e namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.652544838Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.661946796Z" level=info msg="ignoring event" container=8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.664658486Z" level=info msg="shim disconnected" id=8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.664874601Z" level=warning msg="cleaning up after shim disconnected" id=8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.664892402Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.695825967Z" level=info msg="ignoring event" container=70728a77068efe31fc3de9ed6238fa52a23c9b7aa136e603831e0473b2fbe635 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.696489314Z" level=info msg="shim disconnected" id=70728a77068efe31fc3de9ed6238fa52a23c9b7aa136e603831e0473b2fbe635 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.696534617Z" level=warning msg="cleaning up after shim disconnected" id=70728a77068efe31fc3de9ed6238fa52a23c9b7aa136e603831e0473b2fbe635 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.696545618Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4225]: time="2024-09-10T18:07:22.729004290Z" level=info msg="ignoring event" container=9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.735083615Z" level=info msg="shim disconnected" id=9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.736693128Z" level=warning msg="cleaning up after shim disconnected" id=9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114 namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.736835438Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:22 functional-879800 dockerd[4232]: time="2024-09-10T18:07:22.807212964Z" level=warning msg="cleanup warnings time=\"2024-09-10T18:07:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 18:07:27 functional-879800 dockerd[4225]: time="2024-09-10T18:07:27.429406593Z" level=info msg="ignoring event" container=bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:27 functional-879800 dockerd[4232]: time="2024-09-10T18:07:27.429688013Z" level=info msg="shim disconnected" id=bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6 namespace=moby
	Sep 10 18:07:27 functional-879800 dockerd[4232]: time="2024-09-10T18:07:27.430046838Z" level=warning msg="cleaning up after shim disconnected" id=bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6 namespace=moby
	Sep 10 18:07:27 functional-879800 dockerd[4232]: time="2024-09-10T18:07:27.430060039Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.442204332Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.478648728Z" level=info msg="ignoring event" container=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 18:07:32 functional-879800 dockerd[4232]: time="2024-09-10T18:07:32.479459150Z" level=info msg="shim disconnected" id=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035 namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4232]: time="2024-09-10T18:07:32.479530152Z" level=warning msg="cleaning up after shim disconnected" id=95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035 namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4232]: time="2024-09-10T18:07:32.479542052Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.543970812Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.544248320Z" level=info msg="Daemon shutdown complete"
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.544421724Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 10 18:07:32 functional-879800 dockerd[4225]: time="2024-09-10T18:07:32.544458625Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 10 18:07:33 functional-879800 systemd[1]: docker.service: Deactivated successfully.
	Sep 10 18:07:33 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:07:33 functional-879800 systemd[1]: docker.service: Consumed 9.274s CPU time.
	Sep 10 18:07:33 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	Sep 10 18:07:33 functional-879800 dockerd[8778]: time="2024-09-10T18:07:33.598343792Z" level=info msg="Starting up"
	Sep 10 18:08:33 functional-879800 dockerd[8778]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 10 18:08:33 functional-879800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 10 18:08:33 functional-879800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 10 18:08:33 functional-879800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0910 18:08:33.504279    2616 out.go:270] * 
	W0910 18:08:33.507031    2616 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:08:33.510319    2616 out.go:201] 
	
	
	==> Docker <==
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79'"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7'"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740'"
	Sep 10 18:11:34 functional-879800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID '6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b'"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6'"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID '37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db'"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19'"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID 'b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72'"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426'"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="error getting RW layer size for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:11:34 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4'"
	Sep 10 18:11:34 functional-879800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Sep 10 18:11:34 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:11:34 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-10T18:11:36Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep10 18:02] kauditd_printk_skb: 10 callbacks suppressed
	[Sep10 18:03] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[  +0.537677] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +0.225711] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +0.257523] systemd-fstab-generator[3818]: Ignoring "noauto" option for root device
	[  +5.340147] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.877359] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.186798] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.179954] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.256144] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.037441] systemd-fstab-generator[4763]: Ignoring "noauto" option for root device
	[  +3.061765] kauditd_printk_skb: 214 callbacks suppressed
	[  +8.635556] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.985505] systemd-fstab-generator[6074]: Ignoring "noauto" option for root device
	[ +13.788298] kauditd_printk_skb: 17 callbacks suppressed
	[Sep10 18:04] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.694005] systemd-fstab-generator[6726]: Ignoring "noauto" option for root device
	[  +0.162430] kauditd_printk_skb: 23 callbacks suppressed
	[Sep10 18:06] hrtimer: interrupt took 3009709 ns
	[Sep10 18:07] systemd-fstab-generator[8301]: Ignoring "noauto" option for root device
	[  +0.146006] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.426119] systemd-fstab-generator[8351]: Ignoring "noauto" option for root device
	[  +0.222754] systemd-fstab-generator[8363]: Ignoring "noauto" option for root device
	[  +0.259233] systemd-fstab-generator[8377]: Ignoring "noauto" option for root device
	[  +5.236878] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 18:12:34 up 13 min,  0 users,  load average: 0.33, 0.26, 0.24
	Linux functional-879800 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.667080    6081 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.668213    6081 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.668335    6081 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.668509    6081 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: I0910 18:12:34.668687    6081 setters.go:600] "Node became not ready" node="functional-879800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-10T18:12:34Z","lastTransitionTime":"2024-09-10T18:12:34Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 5m13.19808861s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/var/run/docker.sock: read: connection reset by peer]"}
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.670903    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.676050    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: I0910 18:12:34.676138    6081 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.674901    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.676272    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.674802    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.676407    6081 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.674844    6081 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.676434    6081 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.675651    6081 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.676929    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:12:34Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:12:34Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:12:34Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:12:34Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-10T18:12:34Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 5m13.19808861s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/var/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-879800\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800/status?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.678991    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.680029    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.680965    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.681089    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.681350    6081 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.683831    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.687652    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.687799    6081 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	Sep 10 18:12:34 functional-879800 kubelet[6081]: E0910 18:12:34.773659    6081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused" interval="7s"
	

-- /stdout --
** stderr ** 
	E0910 18:11:34.154388   11480 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:11:34.183123   11480 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:11:34.210842   11480 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:11:34.238499   11480 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:11:34.264891   11480 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:11:34.290875   11480 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:11:34.318978   11480 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:11:34.345939   11480 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800: exit status 2 (10.4837705s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-879800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (120.72s)

TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-879800 apply -f testdata\invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-879800 apply -f testdata\invalidsvc.yaml: exit status 1 (4.2153912s)

** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://172.31.208.92:8441/openapi/v2?timeout=32s": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2323: kubectl --context functional-879800 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (4.22s)

TestFunctional/parallel/StatusCmd (127.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd


=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 status: exit status 2 (12.5432959s)

-- stdout --
	functional-879800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-879800 status" : exit status 2
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (11.6372422s)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-879800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 status -o json: exit status 2 (11.0915648s)

-- stdout --
	{"Name":"functional-879800","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-879800 status -o json" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800: exit status 2 (11.2316191s)

-- stdout --
	Running

                                                
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs -n 25: (1m9.8812699s)
helpers_test.go:252: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command  |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| cache     | functional-879800 cache reload                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	| ssh       | functional-879800 ssh                                                                               | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|           | sudo crictl inspecti                                                                                |                   |                   |         |                     |                     |
	|           | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| cache     | delete                                                                                              | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|           | registry.k8s.io/pause:3.1                                                                           |                   |                   |         |                     |                     |
	| cache     | delete                                                                                              | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|           | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| kubectl   | functional-879800 kubectl --                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:05 UTC | 10 Sep 24 18:05 UTC |
	|           | --context functional-879800                                                                         |                   |                   |         |                     |                     |
	|           | get pods                                                                                            |                   |                   |         |                     |                     |
	| start     | -p functional-879800                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:06 UTC |                     |
	|           | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                            |                   |                   |         |                     |                     |
	|           | --wait=all                                                                                          |                   |                   |         |                     |                     |
	| cp        | functional-879800 cp                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| config    | functional-879800 config unset                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-879800 config get                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-879800 config set                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | cpus 2                                                                                              |                   |                   |         |                     |                     |
	| config    | functional-879800 config get                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-879800 config unset                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-879800 config get                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| start     | -p functional-879800                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| start     | -p functional-879800                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| dashboard | --url --port 36195                                                                                  | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | -p functional-879800                                                                                |                   |                   |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	| cp        | functional-879800 cp functional-879800:/home/docker/cp-test.txt                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:18 UTC |
	|           | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1785398597\001\cp-test.txt |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh echo                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | hello                                                                                               |                   |                   |         |                     |                     |
	| cp        | functional-879800 cp                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh cat                                                                           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|           | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| tunnel    | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|           | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel    | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|           | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:17:46
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:17:46.853450    8216 out.go:345] Setting OutFile to fd 1176 ...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.915440    8216 out.go:358] Setting ErrFile to fd 1180...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.934444    8216 out.go:352] Setting JSON to false
	I0910 18:17:46.938978    8216 start.go:129] hostinfo: {"hostname":"minikube5","uptime":103530,"bootTime":1725888736,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:17:46.938978    8216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:17:46.945886    8216 out.go:177] * [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:17:46.948694    8216 notify.go:220] Checking for updates...
	I0910 18:17:46.950952    8216 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:17:46.953367    8216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:17:46.955919    8216 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:17:46.958496    8216 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:17:46.961649    8216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID 'd9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID 'b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="error getting RW layer size for container ID '6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b'"
	Sep 10 18:18:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:18:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543'"
	Sep 10 18:18:36 functional-879800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Sep 10 18:18:36 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:18:36 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-10T18:18:38Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep10 18:02] kauditd_printk_skb: 10 callbacks suppressed
	[Sep10 18:03] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[  +0.537677] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +0.225711] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +0.257523] systemd-fstab-generator[3818]: Ignoring "noauto" option for root device
	[  +5.340147] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.877359] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.186798] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.179954] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.256144] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.037441] systemd-fstab-generator[4763]: Ignoring "noauto" option for root device
	[  +3.061765] kauditd_printk_skb: 214 callbacks suppressed
	[  +8.635556] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.985505] systemd-fstab-generator[6074]: Ignoring "noauto" option for root device
	[ +13.788298] kauditd_printk_skb: 17 callbacks suppressed
	[Sep10 18:04] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.694005] systemd-fstab-generator[6726]: Ignoring "noauto" option for root device
	[  +0.162430] kauditd_printk_skb: 23 callbacks suppressed
	[Sep10 18:06] hrtimer: interrupt took 3009709 ns
	[Sep10 18:07] systemd-fstab-generator[8301]: Ignoring "noauto" option for root device
	[  +0.146006] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.426119] systemd-fstab-generator[8351]: Ignoring "noauto" option for root device
	[  +0.222754] systemd-fstab-generator[8363]: Ignoring "noauto" option for root device
	[  +0.259233] systemd-fstab-generator[8377]: Ignoring "noauto" option for root device
	[  +5.236878] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 18:19:36 up 20 min,  0 users,  load average: 0.04, 0.08, 0.15
	Linux functional-879800 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444483    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444509    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: I0910 18:19:36.444522    6081 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444547    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444559    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: I0910 18:19:36.444569    6081 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444638    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444666    6081 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444687    6081 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444704    6081 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444809    6081 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444876    6081 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.444903    6081 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: I0910 18:19:36.444967    6081 setters.go:600] "Node became not ready" node="functional-879800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-10T18:19:36Z","lastTransitionTime":"2024-09-10T18:19:36Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 12m14.974397519s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"}
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.446487    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.446517    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.447061    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.447091    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.447370    6081 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.448016    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:19:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:19:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:19:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:19:36Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-10T18:19:36Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 12m14.974397519s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to
get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-879800\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800/status?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.450135    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.452533    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.453716    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.455402    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:19:36 functional-879800 kubelet[6081]: E0910 18:19:36.455449    6081 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0910 18:18:35.860141    7340 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:18:35.904867    7340 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:18:35.940268    7340 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:18:35.971588    7340 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:18:36.002807    7340 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:18:36.033024    7340 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:18:36.064741    7340 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:18:36.094684    7340 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800: exit status 2 (10.6268705s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-879800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (127.02s)

TestFunctional/parallel/ServiceCmdConnect (181.45s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-879800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1629: (dbg) Non-zero exit: kubectl --context functional-879800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.1479807s)

** stderr ** 
	error: failed to create deployment: Post "https://172.31.208.92:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-879800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-879800 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-879800 describe po hello-node-connect: exit status 1 (2.1458779s)

** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1604: "kubectl --context functional-879800 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-879800 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-879800 logs -l app=hello-node-connect: exit status 1 (2.169856s)

** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1610: "kubectl --context functional-879800 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-879800 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-879800 describe svc hello-node-connect: exit status 1 (2.1654049s)

** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1616: "kubectl --context functional-879800 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800: exit status 2 (10.7308202s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs -n 25: (2m31.0080833s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| dashboard  | --url --port 36195                                                                                  | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|            | -p functional-879800                                                                                |                   |                   |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	| cp         | functional-879800 cp functional-879800:/home/docker/cp-test.txt                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:18 UTC |
	|            | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1785398597\001\cp-test.txt |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh echo                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | hello                                                                                               |                   |                   |         |                     |                     |
	| cp         | functional-879800 cp                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh cat                                                                           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| tunnel     | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel     | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| addons     | functional-879800 addons list                                                                       | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	| addons     | functional-879800 addons list                                                                       | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | -o json                                                                                             |                   |                   |         |                     |                     |
	| license    |                                                                                                     | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	| ssh        | functional-879800 ssh sudo                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| tunnel     | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image      | functional-879800 image load --daemon                                                               | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:19 UTC |
	|            | kicbase/echo-server:functional-879800                                                               |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image      | functional-879800 image ls                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:19 UTC | 10 Sep 24 18:20 UTC |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:19 UTC | 10 Sep 24 18:19 UTC |
	|            | /etc/ssl/certs/4724.pem                                                                             |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:19 UTC | 10 Sep 24 18:20 UTC |
	|            | /usr/share/ca-certificates/4724.pem                                                                 |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /etc/ssl/certs/51391683.0                                                                           |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /etc/ssl/certs/47242.pem                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /usr/share/ca-certificates/47242.pem                                                                |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /etc/ssl/certs/3ec20f2e.0                                                                           |                   |                   |         |                     |                     |
	| image      | functional-879800 image load --daemon                                                               | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:21 UTC |
	|            | kicbase/echo-server:functional-879800                                                               |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| docker-env | functional-879800 docker-env                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC |                     |
	| image      | functional-879800 image ls                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:21 UTC |                     |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:17:46
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:17:46.853450    8216 out.go:345] Setting OutFile to fd 1176 ...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.915440    8216 out.go:358] Setting ErrFile to fd 1180...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.934444    8216 out.go:352] Setting JSON to false
	I0910 18:17:46.938978    8216 start.go:129] hostinfo: {"hostname":"minikube5","uptime":103530,"bootTime":1725888736,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:17:46.938978    8216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:17:46.945886    8216 out.go:177] * [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:17:46.948694    8216 notify.go:220] Checking for updates...
	I0910 18:17:46.950952    8216 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:17:46.953367    8216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:17:46.955919    8216 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:17:46.958496    8216 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:17:46.961649    8216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19'"
	Sep 10 18:23:37 functional-879800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-10T18:23:37Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +5.340147] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.877359] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.186798] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.179954] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.256144] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.037441] systemd-fstab-generator[4763]: Ignoring "noauto" option for root device
	[  +3.061765] kauditd_printk_skb: 214 callbacks suppressed
	[  +8.635556] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.985505] systemd-fstab-generator[6074]: Ignoring "noauto" option for root device
	[ +13.788298] kauditd_printk_skb: 17 callbacks suppressed
	[Sep10 18:04] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.694005] systemd-fstab-generator[6726]: Ignoring "noauto" option for root device
	[  +0.162430] kauditd_printk_skb: 23 callbacks suppressed
	[Sep10 18:06] hrtimer: interrupt took 3009709 ns
	[Sep10 18:07] systemd-fstab-generator[8301]: Ignoring "noauto" option for root device
	[  +0.146006] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.426119] systemd-fstab-generator[8351]: Ignoring "noauto" option for root device
	[  +0.222754] systemd-fstab-generator[8363]: Ignoring "noauto" option for root device
	[  +0.259233] systemd-fstab-generator[8377]: Ignoring "noauto" option for root device
	[  +5.236878] kauditd_printk_skb: 89 callbacks suppressed
	[Sep10 18:20] systemd-fstab-generator[12847]: Ignoring "noauto" option for root device
	[Sep10 18:21] systemd-fstab-generator[13028]: Ignoring "noauto" option for root device
	[  +0.120959] kauditd_printk_skb: 12 callbacks suppressed
	[Sep10 18:24] systemd-fstab-generator[14069]: Ignoring "noauto" option for root device
	[  +0.111028] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:24:38 up 25 min,  0 users,  load average: 0.00, 0.02, 0.09
	Linux functional-879800 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.915816    6081 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: I0910 18:24:37.916433    6081 setters.go:600] "Node became not ready" node="functional-879800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-10T18:24:37Z","lastTransitionTime":"2024-09-10T18:24:37Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 17m16.445857639s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/var/run/docker.sock: read: connection reset by peer]"}
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.917392    6081 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.916331    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.917518    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919093    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919229    6081 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919301    6081 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919360    6081 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919643    6081 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.920351    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.920442    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.916258    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.921099    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: I0910 18:24:37.921120    6081 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.916313    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.921147    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: I0910 18:24:37.921160    6081 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.921259    6081 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.922340    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 17m16.445857639s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/var/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-879800\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800/status?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.927624    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.934887    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.936109    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.937085    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.937149    6081 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:22:37.116651     772 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.172879     772 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.218875     772 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.249888     772 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.285874     772 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.326890     772 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.367559     772 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:23:37.481595     772 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800: exit status 2 (11.0441624s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-879800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (181.45s)
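Every failure in the block above traces to one root cause visible in the kubelet log: the node's Ready condition flips to False with reason KubeletNotReady because the Docker daemon is unreachable. When triaging a batch of such reports, extracting that reason from the condition JSON programmatically beats eyeballing long log lines; a minimal sketch in Python (the condition fields are copied from the "Node became not ready" kubelet line above; `not_ready_reason` is a hypothetical helper, not part of the test harness):

```python
import json

# Ready-condition payload as logged by the kubelet's setters.go
# (fields copied from the "Node became not ready" line; message shortened).
condition_json = '''{"type":"Ready","status":"False",
 "lastHeartbeatTime":"2024-09-10T18:24:37Z",
 "lastTransitionTime":"2024-09-10T18:24:37Z",
 "reason":"KubeletNotReady",
 "message":"container runtime is down, PLEG is not healthy"}'''

def not_ready_reason(raw: str):
    """Return the reason string when the Ready condition is False, else None."""
    cond = json.loads(raw)
    if cond.get("type") == "Ready" and cond.get("status") == "False":
        return cond.get("reason")
    return None

print(not_ready_reason(condition_json))  # KubeletNotReady
```

For this report the helper would return `KubeletNotReady`, matching the DockerDaemonNotReady runtime state that the subsequent `connection refused` errors on port 8441 follow from.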

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (409.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
    [the warning above repeats 12 more times]
E0910 18:19:10.474984    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: (previous message repeated 13 more times)
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.31.208.92:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": context deadline exceeded
functional_test_pvc_test.go:44: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800
functional_test_pvc_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800: exit status 2 (10.8322304s)

-- stdout --
	Stopped
-- /stdout --
functional_test_pvc_test.go:44: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:44: "functional-879800" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800: exit status 2 (10.3524031s)

-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs -n 25: (2m17.3938429s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| dashboard  | --url --port 36195                                                                                  | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|            | -p functional-879800                                                                                |                   |                   |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	| cp         | functional-879800 cp functional-879800:/home/docker/cp-test.txt                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:18 UTC |
	|            | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1785398597\001\cp-test.txt |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh echo                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | hello                                                                                               |                   |                   |         |                     |                     |
	| cp         | functional-879800 cp                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh cat                                                                           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| tunnel     | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel     | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| addons     | functional-879800 addons list                                                                       | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	| addons     | functional-879800 addons list                                                                       | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | -o json                                                                                             |                   |                   |         |                     |                     |
	| license    |                                                                                                     | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	| ssh        | functional-879800 ssh sudo                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| tunnel     | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image      | functional-879800 image load --daemon                                                               | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:19 UTC |
	|            | kicbase/echo-server:functional-879800                                                               |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image      | functional-879800 image ls                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:19 UTC | 10 Sep 24 18:20 UTC |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:19 UTC | 10 Sep 24 18:19 UTC |
	|            | /etc/ssl/certs/4724.pem                                                                             |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:19 UTC | 10 Sep 24 18:20 UTC |
	|            | /usr/share/ca-certificates/4724.pem                                                                 |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /etc/ssl/certs/51391683.0                                                                           |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /etc/ssl/certs/47242.pem                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /usr/share/ca-certificates/47242.pem                                                                |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /etc/ssl/certs/3ec20f2e.0                                                                           |                   |                   |         |                     |                     |
	| image      | functional-879800 image load --daemon                                                               | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:21 UTC |
	|            | kicbase/echo-server:functional-879800                                                               |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| docker-env | functional-879800 docker-env                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC |                     |
	| image      | functional-879800 image ls                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:21 UTC |                     |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:17:46
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:17:46.853450    8216 out.go:345] Setting OutFile to fd 1176 ...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.915440    8216 out.go:358] Setting ErrFile to fd 1180...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.934444    8216 out.go:352] Setting JSON to false
	I0910 18:17:46.938978    8216 start.go:129] hostinfo: {"hostname":"minikube5","uptime":103530,"bootTime":1725888736,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:17:46.938978    8216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:17:46.945886    8216 out.go:177] * [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:17:46.948694    8216 notify.go:220] Checking for updates...
	I0910 18:17:46.950952    8216 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:17:46.953367    8216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:17:46.955919    8216 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:17:46.958496    8216 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:17:46.961649    8216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19'"
	Sep 10 18:23:37 functional-879800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9979cd5778faa784311d35679eda1080be9e80af0ae34b2c1bc500498eb5c114'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426'"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="error getting RW layer size for container ID '37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:23:37 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:23:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '37ad76e1635d5575d28b041349296f0b4bd7fa4abdadb6ab0882db24c65bc2db'"
	Sep 10 18:23:37 functional-879800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Sep 10 18:23:37 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:23:37 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-10T18:23:39Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +5.340147] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.877359] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.186798] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.179954] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.256144] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.037441] systemd-fstab-generator[4763]: Ignoring "noauto" option for root device
	[  +3.061765] kauditd_printk_skb: 214 callbacks suppressed
	[  +8.635556] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.985505] systemd-fstab-generator[6074]: Ignoring "noauto" option for root device
	[ +13.788298] kauditd_printk_skb: 17 callbacks suppressed
	[Sep10 18:04] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.694005] systemd-fstab-generator[6726]: Ignoring "noauto" option for root device
	[  +0.162430] kauditd_printk_skb: 23 callbacks suppressed
	[Sep10 18:06] hrtimer: interrupt took 3009709 ns
	[Sep10 18:07] systemd-fstab-generator[8301]: Ignoring "noauto" option for root device
	[  +0.146006] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.426119] systemd-fstab-generator[8351]: Ignoring "noauto" option for root device
	[  +0.222754] systemd-fstab-generator[8363]: Ignoring "noauto" option for root device
	[  +0.259233] systemd-fstab-generator[8377]: Ignoring "noauto" option for root device
	[  +5.236878] kauditd_printk_skb: 89 callbacks suppressed
	[Sep10 18:20] systemd-fstab-generator[12847]: Ignoring "noauto" option for root device
	[Sep10 18:21] systemd-fstab-generator[13028]: Ignoring "noauto" option for root device
	[  +0.120959] kauditd_printk_skb: 12 callbacks suppressed
	[Sep10 18:24] systemd-fstab-generator[14069]: Ignoring "noauto" option for root device
	[  +0.111028] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:24:38 up 25 min,  0 users,  load average: 0.00, 0.02, 0.09
	Linux functional-879800 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.915816    6081 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: I0910 18:24:37.916433    6081 setters.go:600] "Node became not ready" node="functional-879800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-10T18:24:37Z","lastTransitionTime":"2024-09-10T18:24:37Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 17m16.445857639s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/var/run/docker.sock: read: connection reset by peer]"}
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.917392    6081 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.916331    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.917518    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919093    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919229    6081 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919301    6081 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919360    6081 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.919643    6081 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.920351    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.920442    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.916258    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.921099    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: I0910 18:24:37.921120    6081 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.916313    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.921147    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: I0910 18:24:37.921160    6081 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.921259    6081 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.922340    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-10T18:24:37Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 17m16.445857639s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to
get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/var/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-879800\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800/status?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.927624    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.934887    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.936109    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.937085    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:24:37 functional-879800 kubelet[6081]: E0910 18:24:37.937149    6081 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0910 18:22:37.118106    5628 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.155938    5628 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.205875    5628 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.237866    5628 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.273877    5628 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.316882    5628 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:22:37.349879    5628 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:23:37.599023    5628 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.47/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_storage-provisioner%22%3Atrue%7D%7D": read unix @->/run/docker.sock: read: connection reset by peer

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800: exit status 2 (11.0452461s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-879800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (409.64s)

TestFunctional/parallel/MySQL (240.59s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-879800 replace --force -f testdata\mysql.yaml
functional_test.go:1793: (dbg) Non-zero exit: kubectl --context functional-879800 replace --force -f testdata\mysql.yaml: exit status 1 (4.2485717s)

** stderr ** 
	error when deleting "testdata\\mysql.yaml": Delete "https://172.31.208.92:8441/api/v1/namespaces/default/services/mysql": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
	error when deleting "testdata\\mysql.yaml": Delete "https://172.31.208.92:8441/apis/apps/v1/namespaces/default/deployments/mysql": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1795: failed to kubectl replace mysql: args "kubectl --context functional-879800 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800: exit status 2 (10.6728397s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs -n 25: (3m34.8626256s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh        | functional-879800 ssh cat                                             | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | /etc/hostname                                                         |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh -n                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | functional-879800 sudo cat                                            |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                       |                   |                   |         |                     |                     |
	| tunnel     | functional-879800 tunnel                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel     | functional-879800 tunnel                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| addons     | functional-879800 addons list                                         | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	| addons     | functional-879800 addons list                                         | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|            | -o json                                                               |                   |                   |         |                     |                     |
	| license    |                                                                       | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	| ssh        | functional-879800 ssh sudo                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | systemctl is-active crio                                              |                   |                   |         |                     |                     |
	| tunnel     | functional-879800 tunnel                                              | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image      | functional-879800 image load --daemon                                 | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:19 UTC |
	|            | kicbase/echo-server:functional-879800                                 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image      | functional-879800 image ls                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:19 UTC | 10 Sep 24 18:20 UTC |
	| ssh        | functional-879800 ssh sudo cat                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:19 UTC | 10 Sep 24 18:19 UTC |
	|            | /etc/ssl/certs/4724.pem                                               |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:19 UTC | 10 Sep 24 18:20 UTC |
	|            | /usr/share/ca-certificates/4724.pem                                   |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /etc/ssl/certs/51391683.0                                             |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /etc/ssl/certs/47242.pem                                              |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /usr/share/ca-certificates/47242.pem                                  |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|            | /etc/ssl/certs/3ec20f2e.0                                             |                   |                   |         |                     |                     |
	| image      | functional-879800 image load --daemon                                 | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:21 UTC |
	|            | kicbase/echo-server:functional-879800                                 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| docker-env | functional-879800 docker-env                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:20 UTC |                     |
	| image      | functional-879800 image ls                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:22 UTC |
	| image      | functional-879800 image load --daemon                                 | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:22 UTC | 10 Sep 24 18:23 UTC |
	|            | kicbase/echo-server:functional-879800                                 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image      | functional-879800 image ls                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:23 UTC | 10 Sep 24 18:24 UTC |
	| image      | functional-879800 image save kicbase/echo-server:functional-879800    | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:24 UTC |                     |
	|            | C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| ssh        | functional-879800 ssh sudo cat                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:24 UTC | 10 Sep 24 18:24 UTC |
	|            | /etc/test/nested/copy/4724/hosts                                      |                   |                   |         |                     |                     |
	| service    | functional-879800 service list                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:24 UTC |                     |
	|------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:17:46
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:17:46.853450    8216 out.go:345] Setting OutFile to fd 1176 ...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.915440    8216 out.go:358] Setting ErrFile to fd 1180...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.934444    8216 out.go:352] Setting JSON to false
	I0910 18:17:46.938978    8216 start.go:129] hostinfo: {"hostname":"minikube5","uptime":103530,"bootTime":1725888736,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:17:46.938978    8216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:17:46.945886    8216 out.go:177] * [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:17:46.948694    8216 notify.go:220] Checking for updates...
	I0910 18:17:46.950952    8216 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:17:46.953367    8216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:17:46.955919    8216 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:17:46.958496    8216 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:17:46.961649    8216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4'"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error getting RW layer size for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79'"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error getting RW layer size for container ID 'd9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543'"
	Sep 10 18:27:38 functional-879800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error getting RW layer size for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bc3783fda69760486e86e23fe4a3232adb5b5b862fcdafd5075b8ba8cc00ced6'"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error getting RW layer size for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead'"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error getting RW layer size for container ID 'b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72'"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error getting RW layer size for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:27:38 functional-879800 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035'"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error getting RW layer size for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426'"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error getting RW layer size for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19'"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="error getting RW layer size for container ID '6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:27:38 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:27:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b'"
	Sep 10 18:27:38 functional-879800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Sep 10 18:27:38 functional-879800 systemd[1]: Stopped Docker Application Container Engine.
	Sep 10 18:27:38 functional-879800 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-10T18:27:40Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.186798] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.179954] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.256144] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.037441] systemd-fstab-generator[4763]: Ignoring "noauto" option for root device
	[  +3.061765] kauditd_printk_skb: 214 callbacks suppressed
	[  +8.635556] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.985505] systemd-fstab-generator[6074]: Ignoring "noauto" option for root device
	[ +13.788298] kauditd_printk_skb: 17 callbacks suppressed
	[Sep10 18:04] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.694005] systemd-fstab-generator[6726]: Ignoring "noauto" option for root device
	[  +0.162430] kauditd_printk_skb: 23 callbacks suppressed
	[Sep10 18:06] hrtimer: interrupt took 3009709 ns
	[Sep10 18:07] systemd-fstab-generator[8301]: Ignoring "noauto" option for root device
	[  +0.146006] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.426119] systemd-fstab-generator[8351]: Ignoring "noauto" option for root device
	[  +0.222754] systemd-fstab-generator[8363]: Ignoring "noauto" option for root device
	[  +0.259233] systemd-fstab-generator[8377]: Ignoring "noauto" option for root device
	[  +5.236878] kauditd_printk_skb: 89 callbacks suppressed
	[Sep10 18:20] systemd-fstab-generator[12847]: Ignoring "noauto" option for root device
	[Sep10 18:21] systemd-fstab-generator[13028]: Ignoring "noauto" option for root device
	[  +0.120959] kauditd_printk_skb: 12 callbacks suppressed
	[Sep10 18:24] systemd-fstab-generator[14069]: Ignoring "noauto" option for root device
	[  +0.111028] kauditd_printk_skb: 12 callbacks suppressed
	[Sep10 18:25] systemd-fstab-generator[14587]: Ignoring "noauto" option for root device
	[  +0.122491] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:28:39 up 29 min,  0 users,  load average: 0.00, 0.00, 0.07
	Linux functional-879800 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 18:28:35 functional-879800 kubelet[6081]: I0910 18:28:35.641644    6081 status_manager.go:851] "Failed to get status for pod" podUID="a52b68ea1609e6e9c45e25b43c1638c7" pod="kube-system/kube-apiserver-functional-879800" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-879800\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.893843    6081 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.894164    6081 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.894230    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.894287    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: I0910 18:28:38.894325    6081 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.894396    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.894454    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.894588    6081 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.894641    6081 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.894681    6081 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.894936    6081 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.895026    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.895078    6081 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.895126    6081 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: I0910 18:28:38.895381    6081 setters.go:600] "Node became not ready" node="functional-879800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-10T18:28:38Z","lastTransitionTime":"2024-09-10T18:28:38Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 21m17.424648397s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"}
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.903027    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.903065    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.903396    6081 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.903620    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:28:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:28:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:28:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-10T18:28:38Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 21m17.424648397s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-879800\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800/status?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.907035    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.907976    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.912650    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.913836    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:28:38 functional-879800 kubelet[6081]: E0910 18:28:38.913949    6081 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0910 18:25:38.050698   11568 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:25:38.112668   11568 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:25:38.187657   11568 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.47/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_coredns%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
	E0910 18:26:38.312402   11568 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:27:38.412283   11568 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:27:38.442377   11568 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:27:38.490913   11568 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:27:38.525115   11568 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800: exit status 2 (10.785956s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-879800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (240.59s)

TestFunctional/parallel/NodeLabels (185.73s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-879800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-879800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (2.1750982s)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-879800 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-879800 -n functional-879800: exit status 2 (10.5356681s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs -n 25: (2m42.2729163s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command  |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| cp        | functional-879800 cp                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| config    | functional-879800 config unset                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-879800 config get                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-879800 config set                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | cpus 2                                                                                              |                   |                   |         |                     |                     |
	| config    | functional-879800 config get                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-879800 config unset                                                                      | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-879800 config get                                                                        | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| start     | -p functional-879800                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| start     | -p functional-879800                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:17 UTC |
	|           | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| dashboard | --url --port 36195                                                                                  | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC |                     |
	|           | -p functional-879800                                                                                |                   |                   |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	| cp        | functional-879800 cp functional-879800:/home/docker/cp-test.txt                                     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:17 UTC | 10 Sep 24 18:18 UTC |
	|           | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1785398597\001\cp-test.txt |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh echo                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | hello                                                                                               |                   |                   |         |                     |                     |
	| cp        | functional-879800 cp                                                                                | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh cat                                                                           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| ssh       | functional-879800 ssh -n                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | functional-879800 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| tunnel    | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|           | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel    | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|           | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| addons    | functional-879800 addons list                                                                       | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	| addons    | functional-879800 addons list                                                                       | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	|           | -o json                                                                                             |                   |                   |         |                     |                     |
	| license   |                                                                                                     | minikube          | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC | 10 Sep 24 18:18 UTC |
	| ssh       | functional-879800 ssh sudo                                                                          | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|           | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| tunnel    | functional-879800 tunnel                                                                            | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|           | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image     | functional-879800 image load --daemon                                                               | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:18 UTC |                     |
	|           | kicbase/echo-server:functional-879800                                                               |                   |                   |         |                     |                     |
	|           | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:17:46
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:17:46.853450    8216 out.go:345] Setting OutFile to fd 1176 ...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.915440    8216 out.go:358] Setting ErrFile to fd 1180...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.934444    8216 out.go:352] Setting JSON to false
	I0910 18:17:46.938978    8216 start.go:129] hostinfo: {"hostname":"minikube5","uptime":103530,"bootTime":1725888736,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:17:46.938978    8216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:17:46.945886    8216 out.go:177] * [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:17:46.948694    8216 notify.go:220] Checking for updates...
	I0910 18:17:46.950952    8216 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:17:46.953367    8216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:17:46.955919    8216 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:17:46.958496    8216 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:17:46.961649    8216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5156bce01a2cbdb771b4467933b07ff48c0db82628b2fa528dea5ae35c1b7ead'"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eefc0cc1915285beaae397e7d703e5b131f1cc75af140b6d50a74174c8baf426'"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID 'd9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd9b410da7bc72151b811c961c3c96f068294a33b4530933fb52e14096ba5a543'"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID '6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6e134f080fb7a0b65691cf4520476ffa09a9fc6bfe051c3b41234f3d6329a08b'"
	Sep 10 18:20:36 functional-879800 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ab25d8c1a3f8b1d19eb27fb7cfd98714da688f6c36e8a3a8df08dd9e29f39e19'"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '07900b60b5ab8a002c22a6dae8ff0687889edddc65d3ddac6ad414c4756086c4'"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7378f8354a266232271bc5ebefcabdce7ec6f66e6704b1bc290547c65ef9aa79'"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID 'b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b547c299d7dd011f02489d4a697d59014bae3219915fe919939c2298c3d0ff72'"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8a9895975328dea96a49ad67c6baefc8d1eb243323a36d191c9fdfc1e58701b7'"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '32232c2de1ef098be2af9bc7bb5a96413ec5246f9aebbf725ba8e78fab7f0740'"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="error getting RW layer size for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:20:36 functional-879800 cri-dockerd[4491]: time="2024-09-10T18:20:36Z" level=error msg="Set backoffDuration to : 1m0s for container ID '95fffa728db0db55fa439e7c6fe920bf3fd7988279e4ee2991eb140f62d26035'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-10T18:20:38Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.225711] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +0.257523] systemd-fstab-generator[3818]: Ignoring "noauto" option for root device
	[  +5.340147] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.877359] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.186798] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.179954] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.256144] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.037441] systemd-fstab-generator[4763]: Ignoring "noauto" option for root device
	[  +3.061765] kauditd_printk_skb: 214 callbacks suppressed
	[  +8.635556] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.985505] systemd-fstab-generator[6074]: Ignoring "noauto" option for root device
	[ +13.788298] kauditd_printk_skb: 17 callbacks suppressed
	[Sep10 18:04] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.694005] systemd-fstab-generator[6726]: Ignoring "noauto" option for root device
	[  +0.162430] kauditd_printk_skb: 23 callbacks suppressed
	[Sep10 18:06] hrtimer: interrupt took 3009709 ns
	[Sep10 18:07] systemd-fstab-generator[8301]: Ignoring "noauto" option for root device
	[  +0.146006] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.426119] systemd-fstab-generator[8351]: Ignoring "noauto" option for root device
	[  +0.222754] systemd-fstab-generator[8363]: Ignoring "noauto" option for root device
	[  +0.259233] systemd-fstab-generator[8377]: Ignoring "noauto" option for root device
	[  +5.236878] kauditd_printk_skb: 89 callbacks suppressed
	[Sep10 18:20] systemd-fstab-generator[12847]: Ignoring "noauto" option for root device
	[Sep10 18:21] systemd-fstab-generator[13028]: Ignoring "noauto" option for root device
	[  +0.120959] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:21:37 up 22 min,  0 users,  load average: 0.00, 0.05, 0.13
	Linux functional-879800 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.856297    6081 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/etcd-functional-879800.17f3f44148aec0f2\": dial tcp 172.31.208.92:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-879800.17f3f44148aec0f2  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-879800,UID:0d90ceb92018b92cb146f2e5e2e41150,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-879800,},FirstTimestamp:2024-09-10 18:07:23.44249573 +0000 UTC m=+217.985375467,LastTimestamp:2024-09-10 18:07:25.44306816 +0000 UTC m=+219.985947797,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-879800,}"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.902304    6081 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.902350    6081 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.902364    6081 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.906648    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.906692    6081 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.906798    6081 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.906839    6081 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: I0910 18:21:36.906910    6081 setters.go:600] "Node became not ready" node="functional-879800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-10T18:21:36Z","lastTransitionTime":"2024-09-10T18:21:36Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 14m15.436336479s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"}
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.912628    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.912665    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.912878    6081 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.912902    6081 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: I0910 18:21:36.912913    6081 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.913547    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:21:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:21:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:21:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-10T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-10T18:21:36Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 14m15.436336479s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-879800\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800/status?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.914506    6081 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.914560    6081 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.914864    6081 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.914910    6081 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.914931    6081 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.916844    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.918137    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.919948    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.921317    6081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-879800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-879800?timeout=10s\": dial tcp 172.31.208.92:8441: connect: connection refused"
	Sep 10 18:21:36 functional-879800 kubelet[6081]: E0910 18:21:36.921849    6081 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0910 18:19:36.231035   12228 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:19:36.264189   12228 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:19:36.312022   12228 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:19:36.353973   12228 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:19:36.381973   12228 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:20:36.481370   12228 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:20:36.518432   12228 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0910 18:20:36.549442   12228 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-879800 -n functional-879800: exit status 2 (10.7308775s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-879800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (185.73s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.29s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I0910 18:18:26.519521     760 out.go:345] Setting OutFile to fd 1272 ...
I0910 18:18:26.609207     760 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:18:26.609290     760 out.go:358] Setting ErrFile to fd 1268...
I0910 18:18:26.609290     760 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:18:26.624920     760 mustload.go:65] Loading cluster: functional-879800
I0910 18:18:26.624920     760 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:18:26.625916     760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:18:28.933392     760 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:18:28.933933     760 main.go:141] libmachine: [stderr =====>] : 
I0910 18:18:28.933933     760 host.go:66] Checking if "functional-879800" exists ...
I0910 18:18:28.934827     760 api_server.go:166] Checking apiserver status ...
I0910 18:18:28.948403     760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0910 18:18:28.948403     760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:18:31.008987     760 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:18:31.008987     760 main.go:141] libmachine: [stderr =====>] : 
I0910 18:18:31.008987     760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
I0910 18:18:33.534051     760 main.go:141] libmachine: [stdout =====>] : 172.31.208.92

I0910 18:18:33.534102     760 main.go:141] libmachine: [stderr =====>] : 
I0910 18:18:33.534102     760 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
I0910 18:18:33.647135     760 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.6984148s)
W0910 18:18:33.647135     760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0910 18:18:33.650999     760 out.go:177] * The control-plane node functional-879800 apiserver is not running: (state=Stopped)
I0910 18:18:33.653213     760 out.go:177]   To start a cluster, run: "minikube start -p functional-879800"

stdout: * The control-plane node functional-879800 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-879800"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 5416: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr] stdout:
* The control-plane node functional-879800 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-879800"
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.29s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-879800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-879800 apply -f testdata\testsvc.yaml: exit status 1 (4.2050411s)

** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://172.31.208.92:8441/openapi/v2?timeout=32s": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-879800 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (48.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image ls --format short --alsologtostderr: (48.8986816s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-879800 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-879800 image ls --format short --alsologtostderr:
I0910 18:28:50.319903    2780 out.go:345] Setting OutFile to fd 1424 ...
I0910 18:28:50.374467    2780 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:28:50.374467    2780 out.go:358] Setting ErrFile to fd 1464...
I0910 18:28:50.374467    2780 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:28:50.384975    2780 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:28:50.387469    2780 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:28:50.387798    2780 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:28:52.293357    2780 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:28:52.293460    2780 main.go:141] libmachine: [stderr =====>] : 
I0910 18:28:52.308987    2780 ssh_runner.go:195] Run: systemctl --version
I0910 18:28:52.308987    2780 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:28:54.182589    2780 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:28:54.182589    2780 main.go:141] libmachine: [stderr =====>] : 
I0910 18:28:54.182791    2780 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
I0910 18:28:56.398225    2780 main.go:141] libmachine: [stdout =====>] : 172.31.208.92

I0910 18:28:56.399669    2780 main.go:141] libmachine: [stderr =====>] : 
I0910 18:28:56.399669    2780 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
I0910 18:28:56.495172    2780 ssh_runner.go:235] Completed: systemctl --version: (4.1859067s)
I0910 18:28:56.503887    2780 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0910 18:29:39.069173    2780 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (42.5623975s)
W0910 18:29:39.069389    2780 cache_images.go:734] Failed to list images for profile functional-879800 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (48.90s)

TestFunctional/parallel/ImageCommands/ImageListTable (60.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image ls --format table --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image ls --format table --alsologtostderr: (1m0.2430677s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-879800 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-879800 image ls --format table --alsologtostderr:
I0910 18:30:39.423416    6540 out.go:345] Setting OutFile to fd 1508 ...
I0910 18:30:39.503875    6540 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:30:39.503928    6540 out.go:358] Setting ErrFile to fd 1504...
I0910 18:30:39.503928    6540 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:30:39.517067    6540 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:30:39.517611    6540 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:30:39.518419    6540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:30:41.604955    6540 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:30:41.605046    6540 main.go:141] libmachine: [stderr =====>] : 
I0910 18:30:41.614431    6540 ssh_runner.go:195] Run: systemctl --version
I0910 18:30:41.614431    6540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:30:43.664676    6540 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:30:43.675194    6540 main.go:141] libmachine: [stderr =====>] : 
I0910 18:30:43.675194    6540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
I0910 18:30:46.098960    6540 main.go:141] libmachine: [stdout =====>] : 172.31.208.92

I0910 18:30:46.098960    6540 main.go:141] libmachine: [stderr =====>] : 
I0910 18:30:46.099788    6540 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
I0910 18:30:46.195589    6540 ssh_runner.go:235] Completed: systemctl --version: (4.5808057s)
I0910 18:30:46.201796    6540 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0910 18:31:39.512361    6540 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (53.3070642s)
W0910 18:31:39.512361    6540 cache_images.go:734] Failed to list images for profile functional-879800 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (60.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (60.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image ls --format json --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image ls --format json --alsologtostderr: (1m0.1787543s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-879800 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-879800 image ls --format json --alsologtostderr:
I0910 18:29:39.235795    8668 out.go:345] Setting OutFile to fd 672 ...
I0910 18:29:39.305404    8668 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:29:39.305484    8668 out.go:358] Setting ErrFile to fd 1200...
I0910 18:29:39.305484    8668 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:29:39.331399    8668 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:29:39.332236    8668 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:29:39.332452    8668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:29:41.190899    8668 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:29:41.190899    8668 main.go:141] libmachine: [stderr =====>] : 
I0910 18:29:41.201085    8668 ssh_runner.go:195] Run: systemctl --version
I0910 18:29:41.201085    8668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:29:43.098899    8668 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:29:43.098899    8668 main.go:141] libmachine: [stderr =====>] : 
I0910 18:29:43.099165    8668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
I0910 18:29:45.326079    8668 main.go:141] libmachine: [stdout =====>] : 172.31.208.92

I0910 18:29:45.326079    8668 main.go:141] libmachine: [stderr =====>] : 
I0910 18:29:45.332422    8668 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
I0910 18:29:45.431200    8668 ssh_runner.go:235] Completed: systemctl --version: (4.2298348s)
I0910 18:29:45.438991    8668 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0910 18:30:39.276661    8668 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (53.8339811s)
W0910 18:30:39.276857    8668 cache_images.go:734] Failed to list images for profile functional-879800 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (60.18s)

TestFunctional/parallel/ImageCommands/ImageListYaml (47.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image ls --format yaml --alsologtostderr
E0910 18:29:10.525341    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image ls --format yaml --alsologtostderr: (47.0744866s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-879800 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-879800 image ls --format yaml --alsologtostderr:
I0910 18:28:52.151390    3820 out.go:345] Setting OutFile to fd 1512 ...
I0910 18:28:52.218324    3820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:28:52.218324    3820 out.go:358] Setting ErrFile to fd 1516...
I0910 18:28:52.218324    3820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:28:52.230773    3820 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:28:52.231122    3820 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:28:52.231433    3820 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:28:54.155175    3820 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:28:54.155175    3820 main.go:141] libmachine: [stderr =====>] : 
I0910 18:28:54.163386    3820 ssh_runner.go:195] Run: systemctl --version
I0910 18:28:54.163386    3820 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:28:56.020170    3820 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:28:56.026190    3820 main.go:141] libmachine: [stderr =====>] : 
I0910 18:28:56.026275    3820 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
I0910 18:28:58.158036    3820 main.go:141] libmachine: [stdout =====>] : 172.31.208.92

I0910 18:28:58.158036    3820 main.go:141] libmachine: [stderr =====>] : 
I0910 18:28:58.168195    3820 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
I0910 18:28:58.268612    3820 ssh_runner.go:235] Completed: systemctl --version: (4.1049537s)
I0910 18:28:58.280311    3820 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0910 18:29:39.084022    3820 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (40.8010046s)
W0910 18:29:39.084152    3820 cache_images.go:734] Failed to list images for profile functional-879800 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (47.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (120.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 ssh pgrep buildkitd: exit status 1 (8.1366353s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image build -t localhost/my-image:functional-879800 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image build -t localhost/my-image:functional-879800 testdata\build --alsologtostderr: (52.1097586s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-879800 image build -t localhost/my-image:functional-879800 testdata\build --alsologtostderr:
I0910 18:29:47.334897    3924 out.go:345] Setting OutFile to fd 1508 ...
I0910 18:29:47.406007    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:29:47.406007    3924 out.go:358] Setting ErrFile to fd 1504...
I0910 18:29:47.406007    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:29:47.419153    3924 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:29:47.436612    3924 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 18:29:47.437087    3924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:29:49.237109    3924 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:29:49.237109    3924 main.go:141] libmachine: [stderr =====>] : 
I0910 18:29:49.245786    3924 ssh_runner.go:195] Run: systemctl --version
I0910 18:29:49.245786    3924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-879800 ).state
I0910 18:29:50.988185    3924 main.go:141] libmachine: [stdout =====>] : Running

I0910 18:29:50.998093    3924 main.go:141] libmachine: [stderr =====>] : 
I0910 18:29:50.998093    3924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-879800 ).networkadapters[0]).ipaddresses[0]
I0910 18:29:53.122286    3924 main.go:141] libmachine: [stdout =====>] : 172.31.208.92

I0910 18:29:53.126226    3924 main.go:141] libmachine: [stderr =====>] : 
I0910 18:29:53.126281    3924 sshutil.go:53] new ssh client: &{IP:172.31.208.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-879800\id_rsa Username:docker}
I0910 18:29:53.223304    3924 ssh_runner.go:235] Completed: systemctl --version: (3.9772553s)
I0910 18:29:53.223304    3924 build_images.go:161] Building image from path: C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3537609568.tar
I0910 18:29:53.232766    3924 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0910 18:29:53.258534    3924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3537609568.tar
I0910 18:29:53.265733    3924 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3537609568.tar: stat -c "%s %y" /var/lib/minikube/build/build.3537609568.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3537609568.tar': No such file or directory
I0910 18:29:53.265733    3924 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3537609568.tar --> /var/lib/minikube/build/build.3537609568.tar (3072 bytes)
I0910 18:29:53.313115    3924 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3537609568
I0910 18:29:53.340176    3924 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3537609568 -xf /var/lib/minikube/build/build.3537609568.tar
I0910 18:29:53.353901    3924 docker.go:360] Building image: /var/lib/minikube/build/build.3537609568
I0910 18:29:53.361939    3924 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-879800 /var/lib/minikube/build/build.3537609568
ERROR: error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
I0910 18:30:39.300817    3924 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-879800 /var/lib/minikube/build/build.3537609568: (45.9358436s)
W0910 18:30:39.301026    3924 build_images.go:125] Failed to build image for profile functional-879800. make sure the profile is running. Docker build /var/lib/minikube/build/build.3537609568.tar: buildimage docker: docker build -t localhost/my-image:functional-879800 /var/lib/minikube/build/build.3537609568: Process exited with status 1
stdout:

stderr:
ERROR: error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
I0910 18:30:39.301080    3924 build_images.go:133] succeeded building to: 
I0910 18:30:39.301080    3924 build_images.go:134] failed building to: functional-879800
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image ls: (1m0.1740344s)
functional_test.go:446: expected "localhost/my-image:functional-879800" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (120.42s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (116.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image load --daemon kicbase/echo-server:functional-879800 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image load --daemon kicbase/echo-server:functional-879800 --alsologtostderr: (56.2029446s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image ls: (1m0.2502729s)
functional_test.go:446: expected "kicbase/echo-server:functional-879800" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (116.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image load --daemon kicbase/echo-server:functional-879800 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image load --daemon kicbase/echo-server:functional-879800 --alsologtostderr: (1m0.2076912s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image ls: (1m0.4198682s)
functional_test.go:446: expected "kicbase/echo-server:functional-879800" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.63s)

TestFunctional/parallel/DockerEnv/powershell (421.84s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-879800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-879800"
functional_test.go:499: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-879800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-879800": exit status 1 (7m1.8380959s)

** stderr ** 
	X Exiting due to MK_DOCKER_SCRIPT: Error generating set output: write /dev/stdout: The pipe is being closed.
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_logs_813ab506d96508632aa6f9e536a5253c5e1ca78d_32.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	E0910 18:27:38.549500    6132 out.go:221] Fprintf failed: write /dev/stdout: The pipe is being closed.

** /stderr **
functional_test.go:502: failed to run the command by deadline. exceeded timeout. powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-879800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-879800"
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (421.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-879800
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image load --daemon kicbase/echo-server:functional-879800 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image load --daemon kicbase/echo-server:functional-879800 --alsologtostderr: (59.5199034s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image ls
E0910 18:24:10.499964    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image ls: (1m0.2076958s)
functional_test.go:446: expected "kicbase/echo-server:functional-879800" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.57s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image save kicbase/echo-server:functional-879800 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image save kicbase/echo-server:functional-879800 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (1m0.4649033s)
functional_test.go:386: expected "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.47s)

TestFunctional/parallel/ServiceCmd/DeployApp (2.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-879800 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1439: (dbg) Non-zero exit: kubectl --context functional-879800 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.149141s)

** stderr ** 
	error: failed to create deployment: Post "https://172.31.208.92:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.31.208.92:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-879800 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (2.16s)

TestFunctional/parallel/ServiceCmd/List (6.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 service list: exit status 103 (6.5206337s)

-- stdout --
	* The control-plane node functional-879800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-879800"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-879800 service list" : exit status 103
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-879800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-879800\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (6.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (6.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 service list -o json: exit status 103 (6.4714005s)

-- stdout --
	* The control-plane node functional-879800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-879800"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-879800 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (6.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (6.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 service --namespace=default --https --url hello-node: exit status 103 (6.3420831s)

-- stdout --
	* The control-plane node functional-879800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-879800"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-879800 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (6.34s)

TestFunctional/parallel/ServiceCmd/Format (6.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 service hello-node --url --format={{.IP}}: exit status 103 (6.27417s)

-- stdout --
	* The control-plane node functional-879800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-879800"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-879800 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1548: "* The control-plane node functional-879800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-879800\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (6.28s)

TestFunctional/parallel/ServiceCmd/URL (6.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 service hello-node --url: exit status 103 (6.2518603s)

-- stdout --
	* The control-plane node functional-879800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-879800"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-879800 service hello-node --url": exit status 103
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-879800 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-879800"
functional_test.go:1569: failed to parse "* The control-plane node functional-879800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-879800\"": parse "* The control-plane node functional-879800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-879800\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (6.25s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: exit status 80 (369.5764ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0910 18:27:38.569595    4488 out.go:345] Setting OutFile to fd 1020 ...
	I0910 18:27:38.657571    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:27:38.657571    4488 out.go:358] Setting ErrFile to fd 1348...
	I0910 18:27:38.657571    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:27:38.670113    4488 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:27:38.671475    4488 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	I0910 18:27:38.780155    4488 cache.go:107] acquiring lock: {Name:mkab03a876aa3cd2aa4cbc5169fcc047637169c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:27:38.781994    4488 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" took 110.5117ms
	I0910 18:27:38.786135    4488 out.go:201] 
	W0910 18:27:38.788672    4488 out.go:270] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	W0910 18:27:38.788672    4488 out.go:270] * 
	* 
	W0910 18:27:38.796324    4488 out.go:293] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_image_9d334fddf764ec6d7b0708a9057c4c5712610888_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_image_9d334fddf764ec6d7b0708a9057c4c5712610888_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:27:38.798030    4488 out.go:201] 

** /stderr **
functional_test.go:411: loading image into minikube from file: exit status 80

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:27:38.569595    4488 out.go:345] Setting OutFile to fd 1020 ...
	I0910 18:27:38.657571    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:27:38.657571    4488 out.go:358] Setting ErrFile to fd 1348...
	I0910 18:27:38.657571    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:27:38.670113    4488 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:27:38.671475    4488 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	I0910 18:27:38.780155    4488 cache.go:107] acquiring lock: {Name:mkab03a876aa3cd2aa4cbc5169fcc047637169c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:27:38.781994    4488 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" took 110.5117ms
	I0910 18:27:38.786135    4488 out.go:201] 
	W0910 18:27:38.788672    4488 out.go:270] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	W0910 18:27:38.788672    4488 out.go:270] * 
	* 
	W0910 18:27:38.796324    4488 out.go:293] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_image_9d334fddf764ec6d7b0708a9057c4c5712610888_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:27:38.798030    4488 out.go:201] 

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.37s)

TestMultiControlPlane/serial/PingHostFromPods (63.93s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-d2tcx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-d2tcx -- sh -c "ping -c 1 172.31.208.1"
E0910 18:44:10.582823    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-d2tcx -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.4236521s)

-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.31.208.1) from pod (busybox-7dff88458-d2tcx): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-lnwzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-lnwzg -- sh -c "ping -c 1 172.31.208.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-lnwzg -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.4128345s)

-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.31.208.1) from pod (busybox-7dff88458-lnwzg): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-wbkmw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-wbkmw -- sh -c "ping -c 1 172.31.208.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-wbkmw -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.4117593s)

-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.31.208.1) from pod (busybox-7dff88458-wbkmw): exit status 1
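For reference, the host-IP extraction that ha_test.go:207 performs inside each busybox pod (before the failing ping at ha_test.go:218) can be reproduced offline. The sample nslookup output below is illustrative only; real busybox nslookup output layout, and hence the `NR==5` line index, can vary between busybox versions:

```shell
# Canned output shaped like busybox nslookup for host.minikube.internal:
nslookup_out='Server:		10.96.0.10
Address:	10.96.0.10:53

Name:	host.minikube.internal
Address 1: 172.31.208.1 host.minikube.internal'

# The test pipeline: awk NR==5 picks the fifth line, cut takes the third
# space-delimited field, yielding the host IP that is then pinged.
echo "$nslookup_out" | awk 'NR==5' | cut -d' ' -f3
```

The ping itself then fails here with 100% packet loss, so the extraction succeeded but ICMP from the pod to the Hyper-V host gateway (172.31.208.1) was blocked or unrouted.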
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-301400 -n ha-301400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-301400 -n ha-301400: (10.7420489s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 logs -n 25: (8.0593767s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-879800 image build -t     | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:29 UTC | 10 Sep 24 18:30 UTC |
	|         | localhost/my-image:functional-879800 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-879800                    | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:30 UTC | 10 Sep 24 18:31 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-879800 image ls           | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:30 UTC | 10 Sep 24 18:31 UTC |
	| delete  | -p functional-879800                 | functional-879800 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:31 UTC | 10 Sep 24 18:32 UTC |
	| start   | -p ha-301400 --wait=true             | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:32 UTC | 10 Sep 24 18:43 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- apply -f             | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- rollout status       | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- get pods -o          | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- get pods -o          | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | busybox-7dff88458-d2tcx --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | busybox-7dff88458-lnwzg --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | busybox-7dff88458-wbkmw --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | busybox-7dff88458-d2tcx --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | busybox-7dff88458-lnwzg --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | busybox-7dff88458-wbkmw --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | busybox-7dff88458-d2tcx -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	|         | busybox-7dff88458-lnwzg -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:44 UTC |
	|         | busybox-7dff88458-wbkmw -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- get pods -o          | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:44 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:44 UTC |
	|         | busybox-7dff88458-d2tcx              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:44 UTC |                     |
	|         | busybox-7dff88458-d2tcx -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:44 UTC |
	|         | busybox-7dff88458-lnwzg              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:44 UTC |                     |
	|         | busybox-7dff88458-lnwzg -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:44 UTC |
	|         | busybox-7dff88458-wbkmw              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-301400 -- exec                 | ha-301400         | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:44 UTC |                     |
	|         | busybox-7dff88458-wbkmw -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:32:58
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:32:58.228794   10084 out.go:345] Setting OutFile to fd 672 ...
	I0910 18:32:58.273534   10084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:32:58.273534   10084 out.go:358] Setting ErrFile to fd 1288...
	I0910 18:32:58.273534   10084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:32:58.292400   10084 out.go:352] Setting JSON to false
	I0910 18:32:58.294449   10084 start.go:129] hostinfo: {"hostname":"minikube5","uptime":104441,"bootTime":1725888736,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:32:58.294449   10084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:32:58.298662   10084 out.go:177] * [ha-301400] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:32:58.304358   10084 notify.go:220] Checking for updates...
	I0910 18:32:58.304358   10084 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:32:58.307226   10084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:32:58.309546   10084 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:32:58.311640   10084 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:32:58.313397   10084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:32:58.316226   10084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:33:03.082908   10084 out.go:177] * Using the hyperv driver based on user configuration
	I0910 18:33:03.087238   10084 start.go:297] selected driver: hyperv
	I0910 18:33:03.087238   10084 start.go:901] validating driver "hyperv" against <nil>
	I0910 18:33:03.087238   10084 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:33:03.127579   10084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 18:33:03.128571   10084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:33:03.128658   10084 cni.go:84] Creating CNI manager for ""
	I0910 18:33:03.128658   10084 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0910 18:33:03.128658   10084 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 18:33:03.128812   10084 start.go:340] cluster config:
	{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:33:03.129006   10084 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:33:03.134487   10084 out.go:177] * Starting "ha-301400" primary control-plane node in "ha-301400" cluster
	I0910 18:33:03.139045   10084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:33:03.139207   10084 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 18:33:03.139289   10084 cache.go:56] Caching tarball of preloaded images
	I0910 18:33:03.139289   10084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 18:33:03.139289   10084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 18:33:03.139289   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:33:03.140434   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json: {Name:mkdbfef16912851ddb95bf4da9e8b839c6383d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:33:03.141517   10084 start.go:360] acquireMachinesLock for ha-301400: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:33:03.141606   10084 start.go:364] duration metric: took 34.5µs to acquireMachinesLock for "ha-301400"
	I0910 18:33:03.141950   10084 start.go:93] Provisioning new machine with config: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:33:03.142166   10084 start.go:125] createHost starting for "" (driver="hyperv")
	I0910 18:33:03.146455   10084 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 18:33:03.146753   10084 start.go:159] libmachine.API.Create for "ha-301400" (driver="hyperv")
	I0910 18:33:03.146829   10084 client.go:168] LocalClient.Create starting
	I0910 18:33:03.147329   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0910 18:33:03.147588   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:33:03.147655   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:33:03.147864   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0910 18:33:03.148075   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:33:03.148120   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:33:03.148215   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0910 18:33:05.004338   10084 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0910 18:33:05.004338   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:05.004620   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0910 18:33:06.552617   10084 main.go:141] libmachine: [stdout =====>] : False
	
	I0910 18:33:06.553226   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:06.553226   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:33:07.893295   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:33:07.893295   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:07.893388   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:33:11.070577   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:33:11.070577   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:11.071993   10084 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:33:11.430865   10084 main.go:141] libmachine: Creating SSH key...
	I0910 18:33:11.534293   10084 main.go:141] libmachine: Creating VM...
	I0910 18:33:11.534293   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:33:14.070238   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:33:14.070238   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:14.070352   10084 main.go:141] libmachine: Using switch "Default Switch"
	I0910 18:33:14.070552   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:33:15.659152   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:33:15.659319   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:15.659319   10084 main.go:141] libmachine: Creating VHD
	I0910 18:33:15.659319   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0910 18:33:19.007170   10084 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 04334C72-017E-4C24-A94A-4A6E611E26DF
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0910 18:33:19.007247   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:19.007247   10084 main.go:141] libmachine: Writing magic tar header
	I0910 18:33:19.007321   10084 main.go:141] libmachine: Writing SSH key tar header
	I0910 18:33:19.017343   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0910 18:33:21.977407   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:21.977407   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:21.978557   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\disk.vhd' -SizeBytes 20000MB
	I0910 18:33:24.310375   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:24.310375   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:24.310953   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-301400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0910 18:33:27.584997   10084 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-301400 Off   0           0                 00:00:00 Operating normally 9.0    
	I0910 18:33:27.584997   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:27.584997   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-301400 -DynamicMemoryEnabled $false
	I0910 18:33:29.586782   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:29.586782   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:29.586782   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-301400 -Count 2
	I0910 18:33:31.507273   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:31.507273   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:31.508185   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-301400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\boot2docker.iso'
	I0910 18:33:33.818155   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:33.818155   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:33.818879   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-301400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\disk.vhd'
	I0910 18:33:36.101335   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:36.101335   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:36.101335   10084 main.go:141] libmachine: Starting VM...
	I0910 18:33:36.101981   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-301400
	I0910 18:33:38.878991   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:38.878991   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:38.878991   10084 main.go:141] libmachine: Waiting for host to start...
	I0910 18:33:38.879496   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:33:40.905504   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:33:40.905504   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:40.905504   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:33:43.197917   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:43.197917   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:44.201990   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:33:46.169313   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:33:46.169313   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:46.170456   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:33:48.414184   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:48.414184   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:49.416861   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:33:51.352272   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:33:51.352272   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:51.352695   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:33:53.575399   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:53.575399   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:54.588365   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:33:56.541809   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:33:56.541856   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:56.541856   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:33:58.739376   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:58.739596   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:59.754450   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:01.729937   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:01.729937   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:01.730026   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:04.086786   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:04.086786   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:04.086869   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:06.052400   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:06.053319   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:06.053319   10084 machine.go:93] provisionDockerMachine start ...
	I0910 18:34:06.053319   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:08.017485   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:08.017485   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:08.017598   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:10.340818   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:10.340818   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:10.347224   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:10.360971   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:10.360971   10084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:34:10.498909   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:34:10.498999   10084 buildroot.go:166] provisioning hostname "ha-301400"
	I0910 18:34:10.498999   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:12.413611   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:12.413611   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:12.413738   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:14.690664   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:14.690664   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:14.695308   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:14.695605   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:14.695782   10084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-301400 && echo "ha-301400" | sudo tee /etc/hostname
	I0910 18:34:14.858610   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-301400
	
	I0910 18:34:14.858840   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:16.780677   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:16.781642   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:16.781642   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:19.025595   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:19.025595   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:19.031327   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:19.031327   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:19.031327   10084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-301400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-301400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-301400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:34:19.178109   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:34:19.178109   10084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 18:34:19.178109   10084 buildroot.go:174] setting up certificates
	I0910 18:34:19.178109   10084 provision.go:84] configureAuth start
	I0910 18:34:19.178109   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:21.067254   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:21.067619   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:21.067619   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:23.357238   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:23.357238   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:23.358176   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:25.243907   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:25.244092   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:25.244092   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:27.467224   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:27.467224   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:27.467224   10084 provision.go:143] copyHostCerts
	I0910 18:34:27.467224   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 18:34:27.467224   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 18:34:27.467224   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 18:34:27.468260   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 18:34:27.469227   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 18:34:27.469227   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 18:34:27.469227   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 18:34:27.469227   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 18:34:27.470229   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 18:34:27.470229   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 18:34:27.470229   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 18:34:27.470229   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 18:34:27.471227   10084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-301400 san=[127.0.0.1 172.31.216.168 ha-301400 localhost minikube]
	I0910 18:34:27.741464   10084 provision.go:177] copyRemoteCerts
	I0910 18:34:27.749368   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:34:27.749368   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:29.669732   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:29.669732   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:29.670485   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:31.899271   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:31.899271   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:31.900175   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:34:32.008680   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2589948s)
	I0910 18:34:32.008680   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 18:34:32.009052   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:34:32.054066   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 18:34:32.055041   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0910 18:34:32.092681   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 18:34:32.092681   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:34:32.134940   10084 provision.go:87] duration metric: took 12.9559642s to configureAuth
	I0910 18:34:32.134940   10084 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:34:32.135545   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:34:32.135545   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:33.962770   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:33.962770   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:33.963062   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:36.209371   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:36.209371   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:36.215564   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:36.215564   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:36.215564   10084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 18:34:36.354095   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 18:34:36.354221   10084 buildroot.go:70] root file system type: tmpfs
	I0910 18:34:36.354519   10084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 18:34:36.354519   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:38.250553   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:38.250553   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:38.251099   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:40.607577   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:40.607577   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:40.614022   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:40.614515   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:40.614620   10084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 18:34:40.774118   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 18:34:40.774213   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:42.734020   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:42.734020   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:42.734531   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:45.000157   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:45.000157   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:45.004092   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:45.004567   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:45.004567   10084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 18:34:47.150506   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 18:34:47.150506   10084 machine.go:96] duration metric: took 41.0944394s to provisionDockerMachine
	I0910 18:34:47.151197   10084 client.go:171] duration metric: took 1m43.9974158s to LocalClient.Create
	I0910 18:34:47.151197   10084 start.go:167] duration metric: took 1m43.997536s to libmachine.API.Create "ha-301400"
	I0910 18:34:47.151197   10084 start.go:293] postStartSetup for "ha-301400" (driver="hyperv")
	I0910 18:34:47.151332   10084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:34:47.162151   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:34:47.162151   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:49.020072   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:49.021078   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:49.021143   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:51.277245   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:51.277245   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:51.278106   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:34:51.389724   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2266551s)
	I0910 18:34:51.398951   10084 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:34:51.404411   10084 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:34:51.404465   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 18:34:51.404772   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 18:34:51.405431   10084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 18:34:51.405517   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 18:34:51.414237   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:34:51.429404   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 18:34:51.470448   10084 start.go:296] duration metric: took 4.3188259s for postStartSetup
	I0910 18:34:51.473342   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:53.323075   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:53.323159   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:53.323159   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:55.591204   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:55.591204   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:55.591423   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:34:55.593214   10084 start.go:128] duration metric: took 1m52.4435735s to createHost
	I0910 18:34:55.593214   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:57.494331   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:57.494331   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:57.494331   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:59.754743   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:59.754798   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:59.758312   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:59.758830   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:59.758830   10084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:34:59.899077   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993300.120154858
	
	I0910 18:34:59.899077   10084 fix.go:216] guest clock: 1725993300.120154858
	I0910 18:34:59.899077   10084 fix.go:229] Guest: 2024-09-10 18:35:00.120154858 +0000 UTC Remote: 2024-09-10 18:34:55.5932142 +0000 UTC m=+117.432237401 (delta=4.526940658s)
	I0910 18:34:59.899077   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:01.840804   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:01.840804   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:01.841002   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:04.167790   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:04.167868   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:04.171698   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:35:04.171768   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:35:04.171768   10084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725993299
	I0910 18:35:04.311304   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 18:34:59 UTC 2024
	
	I0910 18:35:04.311371   10084 fix.go:236] clock set: Tue Sep 10 18:34:59 UTC 2024
	 (err=<nil>)
	I0910 18:35:04.311371   10084 start.go:83] releasing machines lock for "ha-301400", held for 2m1.1615603s
	I0910 18:35:04.311598   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:06.304568   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:06.304568   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:06.305027   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:08.593147   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:08.593147   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:08.596476   10084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 18:35:08.596571   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:08.603667   10084 ssh_runner.go:195] Run: cat /version.json
	I0910 18:35:08.603667   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:10.578401   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:10.578476   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:10.578549   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:10.604983   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:10.604983   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:10.604983   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:12.969630   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:12.969630   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:12.970437   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:35:13.042074   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:13.042307   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:13.042568   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:35:13.085116   10084 ssh_runner.go:235] Completed: cat /version.json: (4.4811474s)
	I0910 18:35:13.099377   10084 ssh_runner.go:195] Run: systemctl --version
	I0910 18:35:13.104988   10084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5082087s)
	W0910 18:35:13.105117   10084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 18:35:13.124602   10084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:35:13.134090   10084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:35:13.143734   10084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:35:13.172614   10084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:35:13.172614   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:35:13.172614   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:35:13.219389   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 18:35:13.247774   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 18:35:13.266283   10084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	W0910 18:35:13.272436   10084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 18:35:13.272436   10084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 18:35:13.276274   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:35:13.305085   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:35:13.334448   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:35:13.361519   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:35:13.390882   10084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:35:13.420360   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:35:13.449104   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:35:13.476668   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 18:35:13.506878   10084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:35:13.533166   10084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:35:13.562006   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:13.769308   10084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 18:35:13.800135   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:35:13.810589   10084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 18:35:13.839428   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:35:13.870320   10084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:35:13.906220   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:35:13.940595   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:35:13.972586   10084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 18:35:14.030701   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:35:14.052989   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:35:14.099603   10084 ssh_runner.go:195] Run: which cri-dockerd
	I0910 18:35:14.113917   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 18:35:14.130926   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 18:35:14.173663   10084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 18:35:14.357458   10084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 18:35:14.530732   10084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 18:35:14.531024   10084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 18:35:14.571764   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:14.760265   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:35:17.320078   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5596409s)
	I0910 18:35:17.333028   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 18:35:17.366618   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:35:17.402235   10084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 18:35:17.589395   10084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 18:35:17.777185   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:17.953962   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 18:35:17.992732   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:35:18.024684   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:18.208533   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 18:35:18.309370   10084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 18:35:18.319958   10084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 18:35:18.328202   10084 start.go:563] Will wait 60s for crictl version
	I0910 18:35:18.337659   10084 ssh_runner.go:195] Run: which crictl
	I0910 18:35:18.351666   10084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:35:18.401247   10084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 18:35:18.409151   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:35:18.447244   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:35:18.484687   10084 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 18:35:18.484687   10084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 18:35:18.489470   10084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 18:35:18.489470   10084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 18:35:18.489470   10084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 18:35:18.489470   10084 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 18:35:18.494481   10084 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 18:35:18.494481   10084 ip.go:214] interface addr: 172.31.208.1/20
	I0910 18:35:18.502470   10084 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 18:35:18.508477   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:35:18.540545   10084 kubeadm.go:883] updating cluster {Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:35:18.540545   10084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:35:18.548562   10084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 18:35:18.567569   10084 docker.go:685] Got preloaded images: 
	I0910 18:35:18.568546   10084 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0910 18:35:18.579665   10084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 18:35:18.608957   10084 ssh_runner.go:195] Run: which lz4
	I0910 18:35:18.614459   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0910 18:35:18.623268   10084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:35:18.629299   10084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:35:18.629924   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0910 18:35:19.848393   10084 docker.go:649] duration metric: took 1.2338511s to copy over tarball
	I0910 18:35:19.859594   10084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:35:28.631052   10084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7707755s)
	I0910 18:35:28.631185   10084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:35:28.687015   10084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 18:35:28.702230   10084 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0910 18:35:28.745365   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:28.948578   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:35:31.963149   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0142602s)
	I0910 18:35:31.972782   10084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 18:35:31.997624   10084 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 18:35:31.998355   10084 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:35:31.998355   10084 kubeadm.go:934] updating node { 172.31.216.168 8443 v1.31.0 docker true true} ...
	I0910 18:35:31.998355   10084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-301400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.216.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:35:32.005332   10084 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 18:35:32.068177   10084 cni.go:84] Creating CNI manager for ""
	I0910 18:35:32.068177   10084 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 18:35:32.068267   10084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:35:32.068267   10084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.216.168 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-301400 NodeName:ha-301400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.216.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.216.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:35:32.068507   10084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.216.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-301400"
	  kubeletExtraArgs:
	    node-ip: 172.31.216.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.216.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:35:32.068574   10084 kube-vip.go:115] generating kube-vip config ...
	I0910 18:35:32.075577   10084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 18:35:32.100835   10084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 18:35:32.101142   10084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0910 18:35:32.112016   10084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:35:32.132106   10084 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:35:32.145182   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0910 18:35:32.161502   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0910 18:35:32.186513   10084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:35:32.214312   10084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0910 18:35:32.243944   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0910 18:35:32.279881   10084 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0910 18:35:32.285155   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:35:32.318149   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:32.493304   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:35:32.517188   10084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400 for IP: 172.31.216.168
	I0910 18:35:32.517267   10084 certs.go:194] generating shared ca certs ...
	I0910 18:35:32.517267   10084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.518138   10084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 18:35:32.518805   10084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 18:35:32.519039   10084 certs.go:256] generating profile certs ...
	I0910 18:35:32.519462   10084 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key
	I0910 18:35:32.519462   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.crt with IP's: []
	I0910 18:35:32.619052   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.crt ...
	I0910 18:35:32.619052   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.crt: {Name:mk6e209ff46848639d08a2ede17d2fe10608f8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.620690   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key ...
	I0910 18:35:32.620690   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key: {Name:mk7e408047fe704d36613f0fda393cd3b1d4780b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.622332   10084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.35399713
	I0910 18:35:32.622992   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.35399713 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.216.168 172.31.223.254]
	I0910 18:35:32.683183   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.35399713 ...
	I0910 18:35:32.683183   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.35399713: {Name:mk8d43549b50f7b266ef84c1b6f9fa794218fdeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.684108   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.35399713 ...
	I0910 18:35:32.684108   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.35399713: {Name:mk959b47b63eed73b48541c01794c05704d529ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.685028   10084 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.35399713 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt
	I0910 18:35:32.698217   10084 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.35399713 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key
	I0910 18:35:32.698459   10084 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key
	I0910 18:35:32.698459   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt with IP's: []
	I0910 18:35:33.033172   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt ...
	I0910 18:35:33.033172   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt: {Name:mk2ab2d2c77377541aa5c23e4746f768eba7b54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:33.035001   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key ...
	I0910 18:35:33.035001   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key: {Name:mkbdae0437ba8e6b1f313e812ff49f0f860c322a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:33.036119   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:35:33.036338   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:35:33.036338   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:35:33.036637   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:35:33.036637   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:35:33.036834   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:35:33.036968   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:35:33.048165   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:35:33.054439   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 18:35:33.055254   10084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 18:35:33.055254   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 18:35:33.055719   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 18:35:33.055719   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 18:35:33.055719   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 18:35:33.056909   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 18:35:33.057199   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:35:33.057373   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 18:35:33.057428   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 18:35:33.059265   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:35:33.104096   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 18:35:33.143835   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:35:33.186353   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 18:35:33.229061   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:35:33.271231   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:35:33.310221   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:35:33.351760   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:35:33.392825   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:35:33.433978   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 18:35:33.474761   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 18:35:33.513401   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:35:33.552248   10084 ssh_runner.go:195] Run: openssl version
	I0910 18:35:33.570302   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:35:33.596948   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:35:33.605775   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:35:33.613487   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:35:33.629302   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:35:33.654382   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 18:35:33.680723   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 18:35:33.686478   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 18:35:33.697449   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 18:35:33.717447   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 18:35:33.743792   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 18:35:33.769272   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 18:35:33.776096   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 18:35:33.788264   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 18:35:33.808819   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
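	(The three `test -L || ln -fs` runs above follow OpenSSL's c_rehash convention: a CA becomes trusted system-wide by symlinking `<subject-hash>.0` in `/etc/ssl/certs` to the PEM file, where the hash comes from `openssl x509 -hash`. A minimal sketch of the same steps, run against a throwaway self-signed CA in a temp directory rather than this run's real certs:)

```shell
# Sketch of the CA install convention used above. Everything below is
# illustrative: a throwaway self-signed CA in a temp dir stands in for
# minikubeCA.pem, and the symlink lands in the temp dir, not /etc/ssl/certs.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=exampleCA" -days 1 \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null
# Subject hash: 8 lowercase hex chars, e.g. the "b5213941" seen in the log.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
# Mirror the "test -L ... || ln -fs ..." guard from the log:
test -L "$DIR/$HASH.0" || ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
```

OpenSSL's certificate lookup resolves an issuer by computing the same hash and opening `<hash>.0` (then `.1`, `.2`, ... on collisions), which is why the symlink name matters and the PEM filename does not.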
	I0910 18:35:33.835567   10084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:35:33.840842   10084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:35:33.840979   10084 kubeadm.go:392] StartCluster: {Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:35:33.849744   10084 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 18:35:33.881888   10084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:35:33.907329   10084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:35:33.932261   10084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:35:33.948080   10084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:35:33.948141   10084 kubeadm.go:157] found existing configuration files:
	
	I0910 18:35:33.960542   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:35:33.978102   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:35:33.987549   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:35:34.017847   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:35:34.034159   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:35:34.044953   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:35:34.074945   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:35:34.091855   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:35:34.099510   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:35:34.127044   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:35:34.142281   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:35:34.152069   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:35:34.167826   10084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 18:35:34.344030   10084 kubeadm.go:310] W0910 18:35:34.568154    1776 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:35:34.345363   10084 kubeadm.go:310] W0910 18:35:34.569114    1776 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:35:34.469561   10084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 18:35:46.751279   10084 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 18:35:46.751344   10084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 18:35:46.751344   10084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 18:35:46.751948   10084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 18:35:46.751948   10084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 18:35:46.751948   10084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 18:35:46.756125   10084 out.go:235]   - Generating certificates and keys ...
	I0910 18:35:46.756192   10084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 18:35:46.756192   10084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 18:35:46.756712   10084 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 18:35:46.756842   10084 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 18:35:46.756842   10084 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 18:35:46.757368   10084 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 18:35:46.757543   10084 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 18:35:46.757543   10084 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-301400 localhost] and IPs [172.31.216.168 127.0.0.1 ::1]
	I0910 18:35:46.757543   10084 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 18:35:46.758226   10084 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-301400 localhost] and IPs [172.31.216.168 127.0.0.1 ::1]
	I0910 18:35:46.758226   10084 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 18:35:46.758226   10084 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 18:35:46.758901   10084 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 18:35:46.758958   10084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 18:35:46.758958   10084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 18:35:46.758958   10084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 18:35:46.758958   10084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 18:35:46.759678   10084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 18:35:46.759949   10084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 18:35:46.760237   10084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 18:35:46.760460   10084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 18:35:46.764579   10084 out.go:235]   - Booting up control plane ...
	I0910 18:35:46.764828   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 18:35:46.764964   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 18:35:46.765125   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 18:35:46.765384   10084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 18:35:46.765618   10084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 18:35:46.765765   10084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 18:35:46.766130   10084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 18:35:46.766455   10084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 18:35:46.766677   10084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002403393s
	I0910 18:35:46.766898   10084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 18:35:46.767025   10084 kubeadm.go:310] [api-check] The API server is healthy after 6.506476091s
	I0910 18:35:46.767346   10084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 18:35:46.767717   10084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 18:35:46.767847   10084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 18:35:46.768235   10084 kubeadm.go:310] [mark-control-plane] Marking the node ha-301400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 18:35:46.768421   10084 kubeadm.go:310] [bootstrap-token] Using token: bhmj7u.h9zw5qeczq3tbkjr
	I0910 18:35:46.770241   10084 out.go:235]   - Configuring RBAC rules ...
	I0910 18:35:46.770241   10084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 18:35:46.772409   10084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 18:35:46.772537   10084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 18:35:46.772607   10084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 18:35:46.772694   10084 kubeadm.go:310] 
	I0910 18:35:46.772694   10084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 18:35:46.772694   10084 kubeadm.go:310] 
	I0910 18:35:46.772945   10084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 18:35:46.773017   10084 kubeadm.go:310] 
	I0910 18:35:46.773175   10084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 18:35:46.773354   10084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 18:35:46.773501   10084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 18:35:46.773542   10084 kubeadm.go:310] 
	I0910 18:35:46.773715   10084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 18:35:46.773786   10084 kubeadm.go:310] 
	I0910 18:35:46.773930   10084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 18:35:46.773962   10084 kubeadm.go:310] 
	I0910 18:35:46.774058   10084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 18:35:46.774205   10084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 18:35:46.774359   10084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 18:35:46.774359   10084 kubeadm.go:310] 
	I0910 18:35:46.774606   10084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 18:35:46.774891   10084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 18:35:46.774972   10084 kubeadm.go:310] 
	I0910 18:35:46.775110   10084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bhmj7u.h9zw5qeczq3tbkjr \
	I0910 18:35:46.775541   10084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b \
	I0910 18:35:46.775687   10084 kubeadm.go:310] 	--control-plane 
	I0910 18:35:46.775687   10084 kubeadm.go:310] 
	I0910 18:35:46.775824   10084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 18:35:46.775928   10084 kubeadm.go:310] 
	I0910 18:35:46.776141   10084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bhmj7u.h9zw5qeczq3tbkjr \
	I0910 18:35:46.776494   10084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
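	(The `--discovery-token-ca-cert-hash` printed above is a SHA-256 over the cluster CA's public key in DER encoding; the standard recipe from the kubeadm documentation reproduces it. The sketch below runs the pipeline against a throwaway self-signed CA, not the cluster's real /etc/kubernetes/pki/ca.crt:)

```shell
# Recompute a kubeadm-style discovery hash: sha256 of the CA public key in
# DER form. The self-signed CA generated here is purely illustrative.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" -days 1 \
  -keyout "$DIR/ca.key" -out "$DIR/ca.crt" 2>/dev/null
CA_HASH=$(openssl x509 -pubkey -noout -in "$DIR/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$CA_HASH"   # 64 hex chars, same shape as the join command's hash
```

Because the hash pins the CA public key, a joining node can bootstrap over an untrusted network: the token authenticates the node to the cluster, and the hash authenticates the cluster to the node.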
	I0910 18:35:46.776539   10084 cni.go:84] Creating CNI manager for ""
	I0910 18:35:46.776539   10084 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 18:35:46.781320   10084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0910 18:35:46.795596   10084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0910 18:35:46.802679   10084 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0910 18:35:46.802679   10084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0910 18:35:46.852139   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0910 18:35:47.360055   10084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:35:47.375182   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:47.375182   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-301400 minikube.k8s.io/updated_at=2024_09_10T18_35_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-301400 minikube.k8s.io/primary=true
	I0910 18:35:47.443123   10084 ops.go:34] apiserver oom_adj: -16
	I0910 18:35:47.693785   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:48.205844   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:48.708548   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:49.214699   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:49.695389   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:50.195868   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:50.703317   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:51.210202   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:51.698886   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:51.841318   10084 kubeadm.go:1113] duration metric: took 4.4809609s to wait for elevateKubeSystemPrivileges
	I0910 18:35:51.841480   10084 kubeadm.go:394] duration metric: took 17.9992874s to StartCluster
	I0910 18:35:51.841506   10084 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:51.841748   10084 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:35:51.843901   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:51.845269   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 18:35:51.845269   10084 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:35:51.845269   10084 start.go:241] waiting for startup goroutines ...
	I0910 18:35:51.845269   10084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 18:35:51.845545   10084 addons.go:69] Setting storage-provisioner=true in profile "ha-301400"
	I0910 18:35:51.845545   10084 addons.go:69] Setting default-storageclass=true in profile "ha-301400"
	I0910 18:35:51.845545   10084 addons.go:234] Setting addon storage-provisioner=true in "ha-301400"
	I0910 18:35:51.845545   10084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-301400"
	I0910 18:35:51.845633   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:35:51.846293   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:51.846942   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:35:51.849542   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:52.043793   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 18:35:52.450549   10084 start.go:971] {"host.minikube.internal": 172.31.208.1} host record injected into CoreDNS's ConfigMap
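	(The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a `log` directive before `errors` and a `hosts` block before the `forward . /etc/resolv.conf` line. The affected part of the resulting Corefile looks roughly like the fragment below, with unrelated directives elided:)

```
.:53 {
    log
    errors
    ...
    hosts {
       172.31.208.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}
```

The `hosts` plugin answers `host.minikube.internal` from the static entry and, thanks to `fallthrough`, passes every other name on to the remaining plugins, so normal cluster and upstream resolution is unaffected.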
	I0910 18:35:53.876839   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:53.877508   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:53.877508   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:53.877508   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:53.878409   10084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:35:53.879100   10084 kapi.go:59] client config for ha-301400: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 18:35:53.880292   10084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:35:53.880369   10084 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 18:35:53.880704   10084 addons.go:234] Setting addon default-storageclass=true in "ha-301400"
	I0910 18:35:53.880774   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:35:53.881844   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:53.882468   10084 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:35:53.882468   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:35:53.882468   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:55.943846   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:55.943846   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:55.943914   10084 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:35:55.943914   10084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:35:55.943914   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:56.003436   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:56.003436   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:56.004057   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:57.959888   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:57.959888   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:57.959888   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:58.391426   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:58.392037   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:58.392362   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:35:58.527832   10084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:35:59.541988   10084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0137212s)
	I0910 18:36:00.261175   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:36:00.261392   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:00.261944   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:36:00.399850   10084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:36:00.535037   10084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 18:36:00.535037   10084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 18:36:00.535037   10084 round_trippers.go:463] GET https://172.31.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0910 18:36:00.535037   10084 round_trippers.go:469] Request Headers:
	I0910 18:36:00.535037   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:36:00.535037   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:36:00.547040   10084 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0910 18:36:00.547784   10084 round_trippers.go:463] PUT https://172.31.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0910 18:36:00.547784   10084 round_trippers.go:469] Request Headers:
	I0910 18:36:00.547784   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:36:00.547784   10084 round_trippers.go:473]     Content-Type: application/json
	I0910 18:36:00.547784   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:36:00.550337   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:36:00.555011   10084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0910 18:36:00.557571   10084 addons.go:510] duration metric: took 8.7117141s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0910 18:36:00.557571   10084 start.go:246] waiting for cluster config update ...
	I0910 18:36:00.557571   10084 start.go:255] writing updated cluster config ...
	I0910 18:36:00.560558   10084 out.go:201] 
	I0910 18:36:00.576139   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:36:00.576139   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:36:00.581161   10084 out.go:177] * Starting "ha-301400-m02" control-plane node in "ha-301400" cluster
	I0910 18:36:00.585141   10084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:36:00.585871   10084 cache.go:56] Caching tarball of preloaded images
	I0910 18:36:00.585871   10084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 18:36:00.586411   10084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 18:36:00.586529   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:36:00.590867   10084 start.go:360] acquireMachinesLock for ha-301400-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:36:00.591493   10084 start.go:364] duration metric: took 544.9µs to acquireMachinesLock for "ha-301400-m02"
	I0910 18:36:00.591632   10084 start.go:93] Provisioning new machine with config: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:36:00.591737   10084 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0910 18:36:00.596945   10084 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 18:36:00.597489   10084 start.go:159] libmachine.API.Create for "ha-301400" (driver="hyperv")
	I0910 18:36:00.597489   10084 client.go:168] LocalClient.Create starting
	I0910 18:36:00.597657   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:36:00.597995   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0910 18:36:02.354685   10084 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0910 18:36:02.354685   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:02.354984   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0910 18:36:03.931473   10084 main.go:141] libmachine: [stdout =====>] : False
	
	I0910 18:36:03.931473   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:03.931678   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:36:05.302753   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:36:05.302753   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:05.303074   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:36:08.547454   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:36:08.548202   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:08.550444   10084 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:36:08.909602   10084 main.go:141] libmachine: Creating SSH key...
	I0910 18:36:09.024917   10084 main.go:141] libmachine: Creating VM...
	I0910 18:36:09.024917   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:36:11.579530   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:36:11.579530   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:11.579958   10084 main.go:141] libmachine: Using switch "Default Switch"
	I0910 18:36:11.580146   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:36:13.167653   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:36:13.167653   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:13.167653   10084 main.go:141] libmachine: Creating VHD
	I0910 18:36:13.167653   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0910 18:36:16.682065   10084 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 814E0FBE-6EB7-4615-9B41-C4E05E66854C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0910 18:36:16.682808   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:16.682808   10084 main.go:141] libmachine: Writing magic tar header
	I0910 18:36:16.682808   10084 main.go:141] libmachine: Writing SSH key tar header
	I0910 18:36:16.692879   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0910 18:36:19.601181   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:19.601181   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:19.601249   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\disk.vhd' -SizeBytes 20000MB
	I0910 18:36:21.924335   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:21.924335   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:21.924967   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-301400-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0910 18:36:25.171518   10084 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-301400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0910 18:36:25.171518   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:25.171518   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-301400-m02 -DynamicMemoryEnabled $false
	I0910 18:36:27.150523   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:27.150523   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:27.151253   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-301400-m02 -Count 2
	I0910 18:36:29.082156   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:29.082156   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:29.082236   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-301400-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\boot2docker.iso'
	I0910 18:36:31.363096   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:31.363096   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:31.363684   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-301400-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\disk.vhd'
	I0910 18:36:33.648824   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:33.648824   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:33.649881   10084 main.go:141] libmachine: Starting VM...
	I0910 18:36:33.649881   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-301400-m02
	I0910 18:36:36.321183   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:36.322292   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:36.322292   10084 main.go:141] libmachine: Waiting for host to start...
	I0910 18:36:36.322292   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:38.308004   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:38.309046   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:38.309046   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:36:40.534285   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:40.535293   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:41.544482   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:43.461742   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:43.461742   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:43.461742   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:36:45.669294   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:45.669369   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:46.683041   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:48.621635   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:48.621635   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:48.621635   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:36:50.842083   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:50.842265   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:51.844086   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:53.782081   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:53.782081   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:53.783065   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:36:55.969729   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:55.969729   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:56.972359   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:58.911237   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:58.911237   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:58.911879   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:01.178646   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:01.178646   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:01.178646   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:03.090108   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:03.091074   10084 main.go:141] libmachine: [stderr =====>] : 
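The "Waiting for host to start..." stretch above is a poll loop: query VM state, then query the adapter's first IP address, and retry (with ~1s pauses) until the guest reports an address. A minimal sketch of that pattern, with the PowerShell `(( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0]` query replaced by a stub that reports an IP on the third poll:

```shell
count_file=$(mktemp)
echo 0 > "$count_file"

# Stub for the Hyper-V IP query: prints nothing until the (simulated)
# guest adapter has an address, here on the third call.
get_vm_ip() {
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  if [ "$n" -ge 3 ]; then echo "172.31.215.2"; fi
  return 0
}

# Retry until the query returns non-empty output or the budget runs out.
wait_for_ip() {
  for _ in $(seq 1 "$1"); do
    ip=$(get_vm_ip)
    if [ -n "$ip" ]; then echo "$ip"; return 0; fi
    sleep 0.1   # the real loop waits roughly a second between queries
  done
  return 1
}

ip=$(wait_for_ip 5)
echo "got $ip"
```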
	I0910 18:37:03.091074   10084 machine.go:93] provisionDockerMachine start ...
	I0910 18:37:03.091074   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:04.991563   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:04.991563   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:04.991925   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:07.235599   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:07.235599   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:07.239473   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:07.252765   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:07.252765   10084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:37:07.384465   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:37:07.384465   10084 buildroot.go:166] provisioning hostname "ha-301400-m02"
	I0910 18:37:07.384465   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:09.224663   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:09.225415   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:09.225487   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:11.405348   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:11.405348   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:11.409743   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:11.410369   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:11.410369   10084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-301400-m02 && echo "ha-301400-m02" | sudo tee /etc/hostname
	I0910 18:37:11.568700   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-301400-m02
	
	I0910 18:37:11.568700   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:13.445214   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:13.445214   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:13.445214   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:15.650522   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:15.650859   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:15.654490   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:15.655083   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:15.655083   10084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-301400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-301400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-301400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:37:15.795222   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
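The SSH script above makes `/etc/hosts` agree with the new hostname: if no line already maps `ha-301400-m02`, it rewrites an existing `127.0.1.1` entry in place, otherwise appends one. The same logic, run against a scratch file instead of the real `/etc/hosts` (so no `sudo`/`tee` needed):

```shell
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$hosts_file"

name=ha-301400-m02
# Only touch the file if the hostname is not already present.
if ! grep -q "\s$name$" "$hosts_file"; then
  if grep -q '^127\.0\.1\.1\s' "$hosts_file"; then
    # Rewrite the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $name/" "$hosts_file"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 $name" >> "$hosts_file"
  fi
fi
cat "$hosts_file"
```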
	I0910 18:37:15.795256   10084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 18:37:15.795311   10084 buildroot.go:174] setting up certificates
	I0910 18:37:15.795345   10084 provision.go:84] configureAuth start
	I0910 18:37:15.795345   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:17.656433   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:17.656433   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:17.656694   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:19.851896   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:19.851896   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:19.851896   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:21.669271   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:21.669681   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:21.669681   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:23.871072   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:23.871072   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:23.871072   10084 provision.go:143] copyHostCerts
	I0910 18:37:23.871072   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 18:37:23.871072   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 18:37:23.871072   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 18:37:23.871756   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 18:37:23.872444   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 18:37:23.872444   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 18:37:23.872444   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 18:37:23.873022   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 18:37:23.873214   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 18:37:23.873792   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 18:37:23.873792   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 18:37:23.873792   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 18:37:23.874774   10084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-301400-m02 san=[127.0.0.1 172.31.215.2 ha-301400-m02 localhost minikube]
	I0910 18:37:24.127406   10084 provision.go:177] copyRemoteCerts
	I0910 18:37:24.134401   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:37:24.135401   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:25.970303   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:25.970465   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:25.970465   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:28.225606   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:28.226345   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:28.226505   10084 sshutil.go:53] new ssh client: &{IP:172.31.215.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 18:37:28.325954   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1912691s)
	I0910 18:37:28.325954   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 18:37:28.325954   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 18:37:28.367868   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 18:37:28.367868   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:37:28.411528   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 18:37:28.411528   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 18:37:28.456947   10084 provision.go:87] duration metric: took 12.6607445s to configureAuth
	I0910 18:37:28.457029   10084 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:37:28.457518   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:37:28.457586   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:30.292757   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:30.292757   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:30.292757   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:32.491164   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:32.491164   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:32.496919   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:32.496919   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:32.496919   10084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 18:37:32.634489   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 18:37:32.634541   10084 buildroot.go:70] root file system type: tmpfs
	I0910 18:37:32.634657   10084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 18:37:32.634748   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:34.493151   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:34.493227   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:34.493227   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:36.690600   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:36.691031   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:36.694912   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:36.695495   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:36.695495   10084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.216.168"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 18:37:36.851219   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.216.168
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 18:37:36.851219   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:38.689860   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:38.689992   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:38.689992   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:40.962337   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:40.962337   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:40.966678   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:40.967195   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:40.967286   10084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 18:37:43.154486   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 18:37:43.154569   10084 machine.go:96] duration metric: took 40.0607822s to provisionDockerMachine
	I0910 18:37:43.154615   10084 client.go:171] duration metric: took 1m42.5501876s to LocalClient.Create
	I0910 18:37:43.154615   10084 start.go:167] duration metric: took 1m42.5501876s to libmachine.API.Create "ha-301400"
	I0910 18:37:43.154678   10084 start.go:293] postStartSetup for "ha-301400-m02" (driver="hyperv")
	I0910 18:37:43.154678   10084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:37:43.163469   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:37:43.163469   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:45.054788   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:45.054788   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:45.055560   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:47.328775   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:47.329201   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:47.329539   10084 sshutil.go:53] new ssh client: &{IP:172.31.215.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 18:37:47.432031   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2682198s)
	I0910 18:37:47.440811   10084 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:37:47.447311   10084 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:37:47.447401   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 18:37:47.447856   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 18:37:47.449223   10084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 18:37:47.449280   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 18:37:47.458778   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:37:47.475446   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 18:37:47.517035   10084 start.go:296] duration metric: took 4.3620618s for postStartSetup
	I0910 18:37:47.519063   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:49.438073   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:49.438141   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:49.438141   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:51.718923   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:51.718923   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:51.719856   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:37:51.724381   10084 start.go:128] duration metric: took 1m51.125124s to createHost
	I0910 18:37:51.724381   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:53.608867   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:53.609863   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:53.610093   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:55.818683   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:55.818683   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:55.823571   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:55.823851   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:55.823851   10084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:37:55.956107   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993476.176233482
	
	I0910 18:37:55.956163   10084 fix.go:216] guest clock: 1725993476.176233482
	I0910 18:37:55.956217   10084 fix.go:229] Guest: 2024-09-10 18:37:56.176233482 +0000 UTC Remote: 2024-09-10 18:37:51.7243813 +0000 UTC m=+293.551507901 (delta=4.451852182s)
	I0910 18:37:55.956282   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:57.784323   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:57.784323   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:57.784405   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:00.012292   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:38:00.012292   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:00.016253   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:38:00.016686   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:38:00.016686   10084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725993475
	I0910 18:38:00.156374   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 18:37:55 UTC 2024
	
	I0910 18:38:00.156374   10084 fix.go:236] clock set: Tue Sep 10 18:37:55 UTC 2024
	 (err=<nil>)
	I0910 18:38:00.156374   10084 start.go:83] releasing machines lock for "ha-301400-m02", held for 1m59.5567897s
	I0910 18:38:00.157100   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:38:02.066216   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:02.066216   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:02.066662   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:04.308018   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:38:04.308988   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:04.311861   10084 out.go:177] * Found network options:
	I0910 18:38:04.315050   10084 out.go:177]   - NO_PROXY=172.31.216.168
	W0910 18:38:04.317623   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 18:38:04.319363   10084 out.go:177]   - NO_PROXY=172.31.216.168
	W0910 18:38:04.321363   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:38:04.322333   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 18:38:04.324338   10084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 18:38:04.324338   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:38:04.331209   10084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 18:38:04.331209   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:38:06.240949   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:06.240949   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:06.240949   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:06.256564   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:06.256783   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:06.256783   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:08.556797   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:38:08.556797   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:08.557667   10084 sshutil.go:53] new ssh client: &{IP:172.31.215.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 18:38:08.584826   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:38:08.585033   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:08.585414   10084 sshutil.go:53] new ssh client: &{IP:172.31.215.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 18:38:08.649636   10084 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3181349s)
	W0910 18:38:08.649753   10084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:38:08.662101   10084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:38:08.667518   10084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3428851s)
	W0910 18:38:08.667616   10084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 18:38:08.698714   10084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
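The `find ... -exec` run above disables competing CNI configs by renaming any bridge/podman file with a `.mk_disabled` suffix while leaving everything else (like the loopback config) alone. Replayed here on a scratch directory with placeholder file names:

```shell
# Bridge/podman CNI disabling step from the log, on a temp directory.
set -eu
cni=$(mktemp -d)
touch "$cni/87-podman-bridge.conflist" "$cni/99-loopback.conf"
# rename bridge/podman configs not already disabled; mirrors the log's find command
find "$cni" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$cni"
```

The `! -name '*.mk_disabled'` guard makes the rename idempotent across repeated provisioning runs.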
	I0910 18:38:08.698714   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:38:08.699867   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:38:08.741544   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 18:38:08.768959   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 18:38:08.788005   10084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	W0910 18:38:08.793105   10084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 18:38:08.793174   10084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 18:38:08.798493   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:38:08.825532   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:38:08.853387   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:38:08.880006   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:38:08.912139   10084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:38:08.946233   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:38:08.974418   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:38:09.002644   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 18:38:09.028635   10084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:38:09.053681   10084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:38:09.079657   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:09.264390   10084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 18:38:09.294029   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:38:09.303772   10084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 18:38:09.331696   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:38:09.358595   10084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:38:09.393007   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:38:09.422563   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:38:09.451892   10084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 18:38:09.518028   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:38:09.539741   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:38:09.577573   10084 ssh_runner.go:195] Run: which cri-dockerd
	I0910 18:38:09.591379   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 18:38:09.608979   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 18:38:09.649897   10084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 18:38:09.826883   10084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 18:38:10.000014   10084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 18:38:10.000014   10084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 18:38:10.038845   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:10.216981   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:38:12.760934   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5437805s)
	I0910 18:38:12.769842   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 18:38:12.800491   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:38:12.831646   10084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 18:38:13.039037   10084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 18:38:13.221980   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:13.408088   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 18:38:13.446968   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:38:13.478243   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:13.670441   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 18:38:13.768009   10084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 18:38:13.781020   10084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 18:38:13.791077   10084 start.go:563] Will wait 60s for crictl version
	I0910 18:38:13.799478   10084 ssh_runner.go:195] Run: which crictl
	I0910 18:38:13.814704   10084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:38:13.864152   10084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 18:38:13.870330   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:38:13.909779   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:38:13.944492   10084 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 18:38:13.948389   10084 out.go:177]   - env NO_PROXY=172.31.216.168
	I0910 18:38:13.952559   10084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 18:38:13.956612   10084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 18:38:13.956612   10084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 18:38:13.956612   10084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 18:38:13.956612   10084 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 18:38:13.958613   10084 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 18:38:13.958613   10084 ip.go:214] interface addr: 172.31.208.1/20
	I0910 18:38:13.966606   10084 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 18:38:13.972600   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
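The `/etc/hosts` update above is a filter-then-append rewrite: strip any stale `host.minikube.internal` line, append the fresh mapping, and copy the temp file back over the original. Sketched against a scratch copy so nothing system-wide is modified (the entries are placeholders):

```shell
# host.minikube.internal rewrite idiom, replayed on a scratch hosts file.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.31.99.9\thost.minikube.internal\n' > "$hosts"
# drop the stale mapping, append the current host IP, copy back
{ grep -v 'host.minikube.internal$' "$hosts"; \
  printf '172.31.208.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Filtering before appending guarantees exactly one mapping survives, so the host IP can change between runs without accumulating duplicate lines.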
	I0910 18:38:13.994682   10084 mustload.go:65] Loading cluster: ha-301400
	I0910 18:38:13.995020   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:38:13.995617   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:38:15.850580   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:15.851580   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:15.851580   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:38:15.851655   10084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400 for IP: 172.31.215.2
	I0910 18:38:15.852183   10084 certs.go:194] generating shared ca certs ...
	I0910 18:38:15.852183   10084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:38:15.852674   10084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 18:38:15.852953   10084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 18:38:15.853214   10084 certs.go:256] generating profile certs ...
	I0910 18:38:15.853667   10084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key
	I0910 18:38:15.853736   10084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.0fcf09ca
	I0910 18:38:15.853736   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.0fcf09ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.216.168 172.31.215.2 172.31.223.254]
	I0910 18:38:15.943090   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.0fcf09ca ...
	I0910 18:38:15.943090   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.0fcf09ca: {Name:mk90cceaad2b5f522282c93ac88ee15814df3b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:38:15.944091   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.0fcf09ca ...
	I0910 18:38:15.944091   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.0fcf09ca: {Name:mk2ace6151b6c6d84e8d5658fc9d86dd8b3b0160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:38:15.944875   10084 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.0fcf09ca -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt
	I0910 18:38:15.960402   10084 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.0fcf09ca -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key
	I0910 18:38:15.961040   10084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key
	I0910 18:38:15.961040   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:38:15.961040   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:38:15.961794   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:38:15.961794   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:38:15.961794   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:38:15.961794   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:38:15.962664   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:38:15.962664   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:38:15.963276   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 18:38:15.963544   10084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 18:38:15.963624   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 18:38:15.963624   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 18:38:15.963624   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 18:38:15.964346   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 18:38:15.964567   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 18:38:15.964567   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 18:38:15.964567   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 18:38:15.965178   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:38:15.965366   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:38:17.849116   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:17.849206   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:17.849318   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:20.112760   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:38:20.112760   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:20.113139   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:38:20.214460   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0910 18:38:20.222246   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0910 18:38:20.248316   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0910 18:38:20.254920   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0910 18:38:20.281861   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0910 18:38:20.287796   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0910 18:38:20.315614   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0910 18:38:20.321748   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0910 18:38:20.347275   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0910 18:38:20.354561   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0910 18:38:20.386090   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0910 18:38:20.392140   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0910 18:38:20.415668   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:38:20.463420   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 18:38:20.509240   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:38:20.560102   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 18:38:20.601783   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0910 18:38:20.644388   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:38:20.683772   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:38:20.732725   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:38:20.775054   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 18:38:20.821547   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 18:38:20.865075   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:38:20.909022   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0910 18:38:20.942265   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0910 18:38:20.970270   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0910 18:38:20.999013   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0910 18:38:21.030763   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0910 18:38:21.060134   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0910 18:38:21.092915   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0910 18:38:21.132616   10084 ssh_runner.go:195] Run: openssl version
	I0910 18:38:21.149089   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 18:38:21.174779   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 18:38:21.181189   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 18:38:21.190651   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 18:38:21.208365   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:38:21.235703   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:38:21.262463   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:38:21.269162   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:38:21.277268   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:38:21.294222   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:38:21.320003   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 18:38:21.349109   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 18:38:21.355518   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 18:38:21.362786   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 18:38:21.377997   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 18:38:21.405179   10084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:38:21.412360   10084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:38:21.412559   10084 kubeadm.go:934] updating node {m02 172.31.215.2 8443 v1.31.0 docker true true} ...
	I0910 18:38:21.412761   10084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-301400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.215.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:38:21.412799   10084 kube-vip.go:115] generating kube-vip config ...
	I0910 18:38:21.420075   10084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 18:38:21.444403   10084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 18:38:21.445400   10084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0910 18:38:21.456402   10084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:38:21.472576   10084 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 18:38:21.480541   10084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 18:38:21.499478   10084 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl
	I0910 18:38:21.500097   10084 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm
	I0910 18:38:21.500097   10084 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet
	I0910 18:38:22.547989   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 18:38:22.556519   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 18:38:22.564079   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 18:38:22.564370   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 18:38:22.634718   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 18:38:22.647722   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 18:38:22.683884   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 18:38:22.683884   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 18:38:22.754891   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:38:22.809195   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 18:38:22.818744   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 18:38:22.861392   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 18:38:22.861392   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0910 18:38:23.748517   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0910 18:38:23.766557   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0910 18:38:23.798847   10084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:38:23.838980   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 18:38:23.878721   10084 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0910 18:38:23.887094   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:38:23.916579   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:24.107719   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:38:24.134651   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:38:24.134811   10084 start.go:317] joinCluster: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:38:24.135344   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 18:38:24.135447   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:38:25.995070   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:25.995190   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:25.995257   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:28.248471   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:38:28.248471   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:28.249474   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:38:28.607517   10084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4718695s)
	I0910 18:38:28.607677   10084 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:38:28.607727   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iqr4jx.ozrvkw4labd4ob0a --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-301400-m02 --control-plane --apiserver-advertise-address=172.31.215.2 --apiserver-bind-port=8443"
	I0910 18:39:08.896023   10084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iqr4jx.ozrvkw4labd4ob0a --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-301400-m02 --control-plane --apiserver-advertise-address=172.31.215.2 --apiserver-bind-port=8443": (40.2855412s)
	I0910 18:39:08.896182   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 18:39:09.646716   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-301400-m02 minikube.k8s.io/updated_at=2024_09_10T18_39_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-301400 minikube.k8s.io/primary=false
	I0910 18:39:09.812265   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-301400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0910 18:39:09.960005   10084 start.go:319] duration metric: took 45.8220882s to joinCluster
	I0910 18:39:09.960005   10084 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:39:09.960722   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:39:09.963485   10084 out.go:177] * Verifying Kubernetes components...
	I0910 18:39:09.973499   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:39:10.234389   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:39:10.263874   10084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:39:10.264440   10084 kapi.go:59] client config for ha-301400: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0910 18:39:10.264619   10084 kubeadm.go:483] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.216.168:8443
	I0910 18:39:10.265234   10084 node_ready.go:35] waiting up to 6m0s for node "ha-301400-m02" to be "Ready" ...
	I0910 18:39:10.265234   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:10.265234   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:10.265234   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:10.265234   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:10.277642   10084 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0910 18:39:10.775504   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:10.775567   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:10.775567   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:10.775567   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:10.785041   10084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 18:39:11.280994   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:11.281062   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:11.281129   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:11.281129   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:11.284891   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:11.771839   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:11.771839   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:11.771839   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:11.771839   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:11.776411   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:12.277262   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:12.277262   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:12.277262   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:12.277262   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:12.283390   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:12.284646   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:12.771043   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:12.771043   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:12.771124   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:12.771124   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:12.779329   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:13.280294   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:13.280330   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:13.280380   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:13.280380   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:13.283555   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:13.771254   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:13.771290   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:13.771290   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:13.771290   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:13.774472   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:14.281062   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:14.281062   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:14.281062   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:14.281062   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:14.288056   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:14.288588   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:14.772522   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:14.772522   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:14.772522   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:14.772522   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:14.777458   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:15.278726   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:15.278726   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:15.278726   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:15.278726   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:15.283820   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:15.782324   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:15.782324   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:15.782324   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:15.782324   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:15.828289   10084 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0910 18:39:16.268974   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:16.268974   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:16.268974   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:16.268974   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:16.273940   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:16.774282   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:16.774466   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:16.774466   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:16.774466   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:16.779348   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:16.780848   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:17.268958   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:17.269026   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:17.269026   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:17.269026   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:17.273085   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:17.771930   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:17.771930   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:17.771930   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:17.771930   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:17.776674   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:18.273179   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:18.273268   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:18.273268   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:18.273268   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:18.279766   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:18.773354   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:18.773703   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:18.773703   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:18.773848   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:18.781873   10084 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 18:39:18.782820   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:19.271355   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:19.271355   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:19.271355   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:19.271355   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:19.276344   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:19.768270   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:19.768348   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:19.768348   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:19.768348   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:19.772659   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:20.269808   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:20.269808   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:20.269808   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:20.269808   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:20.275358   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:20.769026   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:20.769099   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:20.769099   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:20.769099   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:20.776810   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:21.270319   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:21.270391   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:21.270391   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:21.270391   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:21.275506   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:21.276707   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:21.772916   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:21.772916   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:21.772916   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:21.773015   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:21.780531   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:22.272215   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:22.272312   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:22.272312   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:22.272312   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:22.276863   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:22.770480   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:22.770480   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:22.770480   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:22.770480   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:22.776810   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:23.268019   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:23.268207   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:23.268207   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:23.268207   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:23.273404   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:23.770659   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:23.770659   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:23.770659   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:23.770659   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:23.777073   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:23.778048   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:24.270822   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:24.270822   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:24.270822   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:24.270896   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:24.279724   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:24.766978   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:24.767050   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:24.767050   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:24.767123   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:24.772293   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:25.268502   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:25.268502   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:25.268502   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:25.268502   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:25.273760   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:25.772103   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:25.772205   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:25.772289   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:25.772289   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:25.777205   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:26.271285   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:26.271285   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:26.271285   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:26.271285   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:26.274042   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:39:26.274797   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:26.774154   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:26.774154   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:26.774497   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:26.774497   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:26.779704   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:27.272222   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:27.272296   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:27.272296   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:27.272296   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:27.276692   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:27.772561   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:27.772648   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:27.772648   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:27.772648   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:27.778203   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:28.272355   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:28.272421   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:28.272421   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:28.272499   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:28.280123   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:28.281111   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:28.769264   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:28.769600   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:28.769654   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:28.769654   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:28.779633   10084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 18:39:29.273383   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:29.273383   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.273494   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.273494   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.277967   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:29.279728   10084 node_ready.go:49] node "ha-301400-m02" has status "Ready":"True"
	I0910 18:39:29.279968   10084 node_ready.go:38] duration metric: took 19.0134462s for node "ha-301400-m02" to be "Ready" ...
	I0910 18:39:29.279968   10084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:39:29.280258   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:29.280258   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.280370   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.280370   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.286411   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:29.299330   10084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.300039   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fsbwc
	I0910 18:39:29.300039   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.300039   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.300039   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.304928   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:29.305770   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.305840   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.305840   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.305840   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.309002   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:39:29.309272   10084 pod_ready.go:93] pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.309272   10084 pod_ready.go:82] duration metric: took 9.9413ms for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.309814   10084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.309946   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ntqxc
	I0910 18:39:29.309946   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.309946   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.309946   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.315353   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:29.316480   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.316480   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.316480   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.316480   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.320086   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:29.321340   10084 pod_ready.go:93] pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.321340   10084 pod_ready.go:82] duration metric: took 11.5253ms for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.321340   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.321440   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400
	I0910 18:39:29.321440   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.321440   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.321440   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.324709   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:29.325200   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.325736   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.325807   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.325807   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.333701   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:29.333701   10084 pod_ready.go:93] pod "etcd-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.333701   10084 pod_ready.go:82] duration metric: took 12.3599ms for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.333701   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.334669   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 18:39:29.334669   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.334669   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.334669   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.336922   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:39:29.338470   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:29.338524   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.338524   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.338578   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.341488   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:39:29.342568   10084 pod_ready.go:93] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.342568   10084 pod_ready.go:82] duration metric: took 8.8666ms for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.342568   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.476159   10084 request.go:632] Waited for 133.0758ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400
	I0910 18:39:29.476159   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400
	I0910 18:39:29.476159   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.476159   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.476159   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.480967   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:29.679280   10084 request.go:632] Waited for 197.5253ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.679280   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.679670   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.679670   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.679670   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.686112   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:29.687082   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.687082   10084 pod_ready.go:82] duration metric: took 344.4898ms for pod "kube-apiserver-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.687082   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.882250   10084 request.go:632] Waited for 195.1554ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m02
	I0910 18:39:29.882250   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m02
	I0910 18:39:29.882656   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.882786   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.882820   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.887125   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.086242   10084 request.go:632] Waited for 197.7879ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:30.086242   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:30.086242   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.086242   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.086242   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.090255   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.090667   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:30.090667   10084 pod_ready.go:82] duration metric: took 403.5579ms for pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.090667   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.288308   10084 request.go:632] Waited for 197.6277ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400
	I0910 18:39:30.288792   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400
	I0910 18:39:30.288792   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.288792   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.288862   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.294840   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:30.474574   10084 request.go:632] Waited for 178.3027ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:30.474836   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:30.474836   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.474836   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.474836   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.479202   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.480685   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:30.480800   10084 pod_ready.go:82] duration metric: took 390.1067ms for pod "kube-controller-manager-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.480800   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.678212   10084 request.go:632] Waited for 197.2728ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m02
	I0910 18:39:30.678540   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m02
	I0910 18:39:30.678659   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.678659   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.678744   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.682823   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.882228   10084 request.go:632] Waited for 197.974ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:30.882228   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:30.882228   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.882228   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.882573   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.886618   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.887416   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:30.887481   10084 pod_ready.go:82] duration metric: took 406.6537ms for pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.887494   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hqkvv" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.084201   10084 request.go:632] Waited for 196.4573ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqkvv
	I0910 18:39:31.084521   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqkvv
	I0910 18:39:31.084521   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.084607   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.084607   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.092699   10084 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 18:39:31.286276   10084 request.go:632] Waited for 192.3399ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:31.286276   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:31.286276   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.286276   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.286276   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.290834   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:31.292001   10084 pod_ready.go:93] pod "kube-proxy-hqkvv" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:31.292001   10084 pod_ready.go:82] duration metric: took 404.48ms for pod "kube-proxy-hqkvv" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.292001   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sh5jk" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.474356   10084 request.go:632] Waited for 181.7389ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sh5jk
	I0910 18:39:31.474564   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sh5jk
	I0910 18:39:31.474564   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.474657   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.474657   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.482148   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:31.676409   10084 request.go:632] Waited for 193.0379ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:31.676665   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:31.676665   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.676665   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.676665   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.681165   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:31.682091   10084 pod_ready.go:93] pod "kube-proxy-sh5jk" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:31.682091   10084 pod_ready.go:82] duration metric: took 390.0631ms for pod "kube-proxy-sh5jk" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.682184   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.882147   10084 request.go:632] Waited for 199.9492ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400
	I0910 18:39:31.882480   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400
	I0910 18:39:31.882480   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.882480   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.882480   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.889949   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:32.087052   10084 request.go:632] Waited for 196.2544ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:32.087419   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:32.087419   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.087419   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.087419   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.093129   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:32.094360   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:32.094360   10084 pod_ready.go:82] duration metric: took 412.1483ms for pod "kube-scheduler-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:32.094360   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:32.274928   10084 request.go:632] Waited for 179.8087ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m02
	I0910 18:39:32.274928   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m02
	I0910 18:39:32.274928   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.274928   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.275078   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.280118   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:32.478906   10084 request.go:632] Waited for 198.2039ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:32.479148   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:32.479148   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.479148   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.479148   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.483534   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:32.484701   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:32.484701   10084 pod_ready.go:82] duration metric: took 390.3144ms for pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:32.484701   10084 pod_ready.go:39] duration metric: took 3.2045162s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:39:32.484701   10084 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:39:32.493364   10084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:39:32.515794   10084 api_server.go:72] duration metric: took 22.5542622s to wait for apiserver process to appear ...
	I0910 18:39:32.515870   10084 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:39:32.515954   10084 api_server.go:253] Checking apiserver healthz at https://172.31.216.168:8443/healthz ...
	I0910 18:39:32.526378   10084 api_server.go:279] https://172.31.216.168:8443/healthz returned 200:
	ok
	I0910 18:39:32.527159   10084 round_trippers.go:463] GET https://172.31.216.168:8443/version
	I0910 18:39:32.527223   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.527251   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.527251   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.528469   10084 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 18:39:32.528797   10084 api_server.go:141] control plane version: v1.31.0
	I0910 18:39:32.528820   10084 api_server.go:131] duration metric: took 12.9491ms to wait for apiserver health ...
	I0910 18:39:32.528820   10084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:39:32.682761   10084 request.go:632] Waited for 153.9297ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:32.682761   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:32.682761   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.682761   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.683067   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.693720   10084 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 18:39:32.701035   10084 system_pods.go:59] 17 kube-system pods found
	I0910 18:39:32.701105   10084 system_pods.go:61] "coredns-6f6b679f8f-fsbwc" [d8424acd-9e8d-42f0-bb05-3c81f13ce95f] Running
	I0910 18:39:32.701105   10084 system_pods.go:61] "coredns-6f6b679f8f-ntqxc" [255790ad-d8ab-404b-8a56-d4b0d4749c5f] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "etcd-ha-301400" [53ecd514-99fb-4e66-990a-1aae70a58578] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "etcd-ha-301400-m02" [53e8cab6-7a5d-481e-8147-034a63ded164] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kindnet-7zqv2" [5b0541a3-83c5-4e1d-8349-dd260cb795ed] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kindnet-jv4nt" [17e701eb-dfb9-4f30-8cd6-817e85ccef65] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-apiserver-ha-301400" [8616d1fb-0711-40db-bef3-3f3eb9caf056] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-apiserver-ha-301400-m02" [de432e1e-2312-4812-bbee-89731ea35356] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-controller-manager-ha-301400" [f59e11f0-96f0-4cb8-878f-9e5076f0a5f5] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-controller-manager-ha-301400-m02" [b48f9144-ac88-4139-a35f-0f7d0690821f] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-proxy-hqkvv" [f278de26-1d40-4bff-8681-779d4641ac03] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-proxy-sh5jk" [18240255-e421-4430-b4bb-49ff7b5430d8] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-scheduler-ha-301400" [08507d01-54f8-4cbe-84bb-aa4e06c16520] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-scheduler-ha-301400-m02" [3beb7194-4252-49ed-8476-f624db9af5fa] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-vip-ha-301400" [fe68ce21-9a06-4d4a-9dc1-d36e76a54ef1] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-vip-ha-301400-m02" [b23f424b-46c2-4f19-9175-d061bb1966e5] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "storage-provisioner" [9d7be235-eee4-4f13-bc1c-7777806e77f2] Running
	I0910 18:39:32.701134   10084 system_pods.go:74] duration metric: took 172.3021ms to wait for pod list to return data ...
	I0910 18:39:32.701134   10084 default_sa.go:34] waiting for default service account to be created ...
	I0910 18:39:32.884795   10084 request.go:632] Waited for 183.3498ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/default/serviceaccounts
	I0910 18:39:32.885169   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/default/serviceaccounts
	I0910 18:39:32.885169   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.885169   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.885169   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.890535   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:32.891098   10084 default_sa.go:45] found service account: "default"
	I0910 18:39:32.891227   10084 default_sa.go:55] duration metric: took 190.08ms for default service account to be created ...
	I0910 18:39:32.891227   10084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 18:39:33.085149   10084 request.go:632] Waited for 193.9086ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:33.085436   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:33.085436   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:33.085436   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:33.085436   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:33.092435   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:33.101623   10084 system_pods.go:86] 17 kube-system pods found
	I0910 18:39:33.101623   10084 system_pods.go:89] "coredns-6f6b679f8f-fsbwc" [d8424acd-9e8d-42f0-bb05-3c81f13ce95f] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "coredns-6f6b679f8f-ntqxc" [255790ad-d8ab-404b-8a56-d4b0d4749c5f] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "etcd-ha-301400" [53ecd514-99fb-4e66-990a-1aae70a58578] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "etcd-ha-301400-m02" [53e8cab6-7a5d-481e-8147-034a63ded164] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kindnet-7zqv2" [5b0541a3-83c5-4e1d-8349-dd260cb795ed] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kindnet-jv4nt" [17e701eb-dfb9-4f30-8cd6-817e85ccef65] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-apiserver-ha-301400" [8616d1fb-0711-40db-bef3-3f3eb9caf056] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-apiserver-ha-301400-m02" [de432e1e-2312-4812-bbee-89731ea35356] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-controller-manager-ha-301400" [f59e11f0-96f0-4cb8-878f-9e5076f0a5f5] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-controller-manager-ha-301400-m02" [b48f9144-ac88-4139-a35f-0f7d0690821f] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-proxy-hqkvv" [f278de26-1d40-4bff-8681-779d4641ac03] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-proxy-sh5jk" [18240255-e421-4430-b4bb-49ff7b5430d8] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-scheduler-ha-301400" [08507d01-54f8-4cbe-84bb-aa4e06c16520] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-scheduler-ha-301400-m02" [3beb7194-4252-49ed-8476-f624db9af5fa] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-vip-ha-301400" [fe68ce21-9a06-4d4a-9dc1-d36e76a54ef1] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-vip-ha-301400-m02" [b23f424b-46c2-4f19-9175-d061bb1966e5] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "storage-provisioner" [9d7be235-eee4-4f13-bc1c-7777806e77f2] Running
	I0910 18:39:33.101623   10084 system_pods.go:126] duration metric: took 210.3817ms to wait for k8s-apps to be running ...
	I0910 18:39:33.101623   10084 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 18:39:33.109254   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:39:33.131944   10084 system_svc.go:56] duration metric: took 30.3186ms WaitForService to wait for kubelet
	I0910 18:39:33.132490   10084 kubeadm.go:582] duration metric: took 23.1709166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:39:33.132490   10084 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:39:33.274656   10084 request.go:632] Waited for 142.0381ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes
	I0910 18:39:33.274656   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes
	I0910 18:39:33.274656   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:33.274656   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:33.274656   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:33.279306   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:33.281541   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:39:33.281665   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:39:33.281665   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:39:33.281665   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:39:33.281665   10084 node_conditions.go:105] duration metric: took 149.1645ms to run NodePressure ...
	I0910 18:39:33.281665   10084 start.go:241] waiting for startup goroutines ...
	I0910 18:39:33.281665   10084 start.go:255] writing updated cluster config ...
	I0910 18:39:33.285594   10084 out.go:201] 
	I0910 18:39:33.304473   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:39:33.305075   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:39:33.313474   10084 out.go:177] * Starting "ha-301400-m03" control-plane node in "ha-301400" cluster
	I0910 18:39:33.315943   10084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:39:33.315943   10084 cache.go:56] Caching tarball of preloaded images
	I0910 18:39:33.316551   10084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 18:39:33.316551   10084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 18:39:33.316551   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:39:33.324007   10084 start.go:360] acquireMachinesLock for ha-301400-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:39:33.324563   10084 start.go:364] duration metric: took 555.7µs to acquireMachinesLock for "ha-301400-m03"
	I0910 18:39:33.324563   10084 start.go:93] Provisioning new machine with config: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:39:33.324563   10084 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0910 18:39:33.328477   10084 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 18:39:33.328477   10084 start.go:159] libmachine.API.Create for "ha-301400" (driver="hyperv")
	I0910 18:39:33.328477   10084 client.go:168] LocalClient.Create starting
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:39:33.329604   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:39:33.329677   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0910 18:39:35.052237   10084 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0910 18:39:35.052237   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:35.052312   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0910 18:39:36.647086   10084 main.go:141] libmachine: [stdout =====>] : False
	
	I0910 18:39:36.647086   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:36.647086   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:39:37.981223   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:39:37.981223   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:37.981298   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:39:41.215759   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:39:41.215759   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:41.217373   10084 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:39:41.550301   10084 main.go:141] libmachine: Creating SSH key...
	I0910 18:39:41.880191   10084 main.go:141] libmachine: Creating VM...
	I0910 18:39:41.880191   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:39:44.437109   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:39:44.437934   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:44.438090   10084 main.go:141] libmachine: Using switch "Default Switch"
	I0910 18:39:44.438151   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:39:46.048089   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:39:46.048954   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:46.048954   10084 main.go:141] libmachine: Creating VHD
	I0910 18:39:46.049029   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0910 18:39:49.415060   10084 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 523E1A95-D394-4013-A855-FDAC20B554DD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0910 18:39:49.415060   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:49.415060   10084 main.go:141] libmachine: Writing magic tar header
	I0910 18:39:49.415060   10084 main.go:141] libmachine: Writing SSH key tar header
	I0910 18:39:49.426432   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0910 18:39:52.371873   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:39:52.371873   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:52.372607   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\disk.vhd' -SizeBytes 20000MB
	I0910 18:39:54.701739   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:39:54.701898   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:54.701898   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-301400-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0910 18:39:57.939419   10084 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-301400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0910 18:39:57.939419   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:57.939419   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-301400-m03 -DynamicMemoryEnabled $false
	I0910 18:39:59.960646   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:39:59.961214   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:59.961214   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-301400-m03 -Count 2
	I0910 18:40:01.942861   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:01.942861   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:01.943516   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-301400-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\boot2docker.iso'
	I0910 18:40:04.274717   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:04.274949   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:04.274949   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-301400-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\disk.vhd'
	I0910 18:40:06.665497   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:06.665566   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:06.665566   10084 main.go:141] libmachine: Starting VM...
	I0910 18:40:06.665566   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-301400-m03
	I0910 18:40:09.503182   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:09.503182   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:09.503182   10084 main.go:141] libmachine: Waiting for host to start...
	I0910 18:40:09.503182   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:11.518957   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:11.519896   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:11.519968   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:13.738591   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:13.738591   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:14.750940   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:16.715340   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:16.716227   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:16.716305   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:18.975398   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:18.976174   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:19.989830   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:21.979637   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:21.979637   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:21.979637   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:24.250594   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:24.250594   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:25.266288   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:27.238262   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:27.238262   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:27.238262   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:29.457264   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:29.457264   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:30.458560   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:32.453380   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:32.454104   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:32.454104   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:34.854253   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:34.854447   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:34.854514   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:36.812352   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:36.812405   10084 main.go:141] libmachine: [stderr =====>] : 
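The "Waiting for host to start..." phase above is a polling loop: the driver repeatedly checks that the VM is `Running` and asks Hyper-V for `(( Get-VM <name> ).networkadapters[0]).ipaddresses[0]`, sleeping about a second between empty answers until an address appears. A self-contained Go sketch of that retry shape (`queryIP` is a hypothetical stub standing in for the PowerShell call):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollForIP retries queryIP until it returns a non-empty address or the
// attempt budget is exhausted, mirroring the wait loop in the log.
func pollForIP(queryIP func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := queryIP(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for VM IP")
}

func main() {
	// Simulate Hyper-V returning an empty address for the first few polls,
	// as in the log, before the adapter reports 172.31.217.146.
	calls := 0
	query := func() string {
		calls++
		if calls < 4 {
			return ""
		}
		return "172.31.217.146"
	}
	ip, err := pollForIP(query, 10, 10*time.Millisecond)
	fmt.Println(ip, err)
}
```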
	I0910 18:40:36.812405   10084 machine.go:93] provisionDockerMachine start ...
	I0910 18:40:36.812405   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:38.743323   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:38.744202   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:38.744202   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:41.031895   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:41.031944   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:41.035393   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:40:41.049326   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:40:41.049388   10084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:40:41.171928   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:40:41.171928   10084 buildroot.go:166] provisioning hostname "ha-301400-m03"
	I0910 18:40:41.171928   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:43.060959   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:43.060959   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:43.060959   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:45.291389   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:45.291389   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:45.295379   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:40:45.296010   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:40:45.296010   10084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-301400-m03 && echo "ha-301400-m03" | sudo tee /etc/hostname
	I0910 18:40:45.441242   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-301400-m03
	
	I0910 18:40:45.441319   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:47.322336   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:47.322336   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:47.322336   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:49.614787   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:49.614787   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:49.619971   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:40:49.619971   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:40:49.620496   10084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-301400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-301400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-301400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:40:49.757536   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:40:49.757536   10084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 18:40:49.757536   10084 buildroot.go:174] setting up certificates
	I0910 18:40:49.757536   10084 provision.go:84] configureAuth start
	I0910 18:40:49.757536   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:51.691806   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:51.691935   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:51.692012   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:53.997803   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:53.997803   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:53.998086   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:55.956618   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:55.956618   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:55.956732   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:58.225854   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:58.226699   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:58.226699   10084 provision.go:143] copyHostCerts
	I0910 18:40:58.226829   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 18:40:58.227173   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 18:40:58.227173   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 18:40:58.227173   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 18:40:58.228629   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 18:40:58.228629   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 18:40:58.229167   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 18:40:58.229448   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 18:40:58.230495   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 18:40:58.230495   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 18:40:58.230495   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 18:40:58.231192   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 18:40:58.231926   10084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-301400-m03 san=[127.0.0.1 172.31.217.146 ha-301400-m03 localhost minikube]
	I0910 18:40:58.469524   10084 provision.go:177] copyRemoteCerts
	I0910 18:40:58.477778   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:40:58.477778   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:00.357497   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:00.357581   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:00.357698   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:02.640198   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:02.640198   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:02.640519   10084 sshutil.go:53] new ssh client: &{IP:172.31.217.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\id_rsa Username:docker}
	I0910 18:41:02.746080   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2680139s)
	I0910 18:41:02.746080   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 18:41:02.746407   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 18:41:02.791824   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 18:41:02.792115   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:41:02.838513   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 18:41:02.838513   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 18:41:02.881135   10084 provision.go:87] duration metric: took 13.1227128s to configureAuth
	I0910 18:41:02.881180   10084 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:41:02.881211   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:41:02.881747   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:04.820005   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:04.820680   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:04.820680   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:07.159900   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:07.159900   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:07.165491   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:07.165643   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:07.165643   10084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 18:41:07.292663   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 18:41:07.292663   10084 buildroot.go:70] root file system type: tmpfs
	I0910 18:41:07.292663   10084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 18:41:07.292663   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:09.201150   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:09.201150   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:09.201560   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:11.503247   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:11.503247   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:11.507856   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:11.508276   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:11.508276   10084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.216.168"
	Environment="NO_PROXY=172.31.216.168,172.31.215.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 18:41:11.661740   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.216.168
	Environment=NO_PROXY=172.31.216.168,172.31.215.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 18:41:11.662264   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:13.583429   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:13.583429   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:13.584568   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:15.898826   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:15.898826   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:15.902553   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:15.902795   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:15.902795   10084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 18:41:18.095377   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 18:41:18.095377   10084 machine.go:96] duration metric: took 41.2801836s to provisionDockerMachine
	I0910 18:41:18.095377   10084 client.go:171] duration metric: took 1m44.7598153s to LocalClient.Create
	I0910 18:41:18.095377   10084 start.go:167] duration metric: took 1m44.7598153s to libmachine.API.Create "ha-301400"
	I0910 18:41:18.095377   10084 start.go:293] postStartSetup for "ha-301400-m03" (driver="hyperv")
	I0910 18:41:18.095377   10084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:41:18.105666   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:41:18.105666   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:20.015100   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:20.015100   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:20.015985   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:22.335910   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:22.336618   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:22.336668   10084 sshutil.go:53] new ssh client: &{IP:172.31.217.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\id_rsa Username:docker}
	I0910 18:41:22.434464   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3285051s)
	I0910 18:41:22.443469   10084 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:41:22.450856   10084 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:41:22.450856   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 18:41:22.450856   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 18:41:22.451504   10084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 18:41:22.451504   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 18:41:22.461507   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:41:22.479988   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 18:41:22.522918   10084 start.go:296] duration metric: took 4.4272421s for postStartSetup
	I0910 18:41:22.525121   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:24.423763   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:24.424572   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:24.424654   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:26.732650   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:26.732650   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:26.733647   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:41:26.735438   10084 start.go:128] duration metric: took 1m53.4032081s to createHost
	I0910 18:41:26.735623   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:28.662368   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:28.663230   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:28.663320   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:30.987273   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:30.987273   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:30.991376   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:30.991759   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:30.991825   10084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:41:31.125190   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993691.342270655
	
	I0910 18:41:31.125266   10084 fix.go:216] guest clock: 1725993691.342270655
	I0910 18:41:31.125266   10084 fix.go:229] Guest: 2024-09-10 18:41:31.342270655 +0000 UTC Remote: 2024-09-10 18:41:26.7355524 +0000 UTC m=+508.548126501 (delta=4.606718255s)
	I0910 18:41:31.125266   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:33.026036   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:33.026960   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:33.026960   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:35.304095   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:35.304095   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:35.308690   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:35.309069   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:35.309171   10084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725993691
	I0910 18:41:35.446904   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 18:41:31 UTC 2024
	
	I0910 18:41:35.446904   10084 fix.go:236] clock set: Tue Sep 10 18:41:31 UTC 2024
	 (err=<nil>)
	I0910 18:41:35.446904   10084 start.go:83] releasing machines lock for "ha-301400-m03", held for 2m2.114086s
	I0910 18:41:35.447527   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:37.381658   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:37.381658   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:37.381727   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:39.674835   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:39.675787   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:39.678119   10084 out.go:177] * Found network options:
	I0910 18:41:39.680867   10084 out.go:177]   - NO_PROXY=172.31.216.168,172.31.215.2
	W0910 18:41:39.682790   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:41:39.682790   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 18:41:39.687097   10084 out.go:177]   - NO_PROXY=172.31.216.168,172.31.215.2
	W0910 18:41:39.689258   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:41:39.689293   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:41:39.690592   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:41:39.690662   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 18:41:39.692319   10084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 18:41:39.692319   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:39.698917   10084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 18:41:39.699926   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:41.643366   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:41.643366   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:41.643366   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:41.660959   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:41.661049   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:41.661049   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:43.985673   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:43.985854   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:43.986241   10084 sshutil.go:53] new ssh client: &{IP:172.31.217.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\id_rsa Username:docker}
	I0910 18:41:44.005360   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:44.005360   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:44.005360   10084 sshutil.go:53] new ssh client: &{IP:172.31.217.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\id_rsa Username:docker}
	I0910 18:41:44.081008   10084 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.381796s)
	W0910 18:41:44.081077   10084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:41:44.090033   10084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:41:44.093187   10084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4005715s)
	W0910 18:41:44.093187   10084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 18:41:44.124843   10084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:41:44.124843   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:41:44.124843   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:41:44.169828   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 18:41:44.200526   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0910 18:41:44.216363   10084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 18:41:44.216744   10084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 18:41:44.225386   10084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 18:41:44.233014   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:41:44.260076   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:41:44.287226   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:41:44.313413   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:41:44.339402   10084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:41:44.365816   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:41:44.393641   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:41:44.419167   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 18:41:44.448391   10084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:41:44.478781   10084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:41:44.509113   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:44.697178   10084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 18:41:44.730640   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:41:44.742597   10084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 18:41:44.777928   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:41:44.807962   10084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:41:44.840792   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:41:44.875486   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:41:44.905166   10084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 18:41:44.964114   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:41:44.989184   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:41:45.040806   10084 ssh_runner.go:195] Run: which cri-dockerd
	I0910 18:41:45.054585   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 18:41:45.072888   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 18:41:45.110566   10084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 18:41:45.296153   10084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 18:41:45.488512   10084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 18:41:45.488637   10084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 18:41:45.540709   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:45.737200   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:41:48.304284   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5669113s)
	I0910 18:41:48.312795   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 18:41:48.344320   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:41:48.375059   10084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 18:41:48.563151   10084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 18:41:48.747239   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:48.930629   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 18:41:48.966905   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:41:48.997827   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:49.185037   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 18:41:49.285589   10084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 18:41:49.294834   10084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 18:41:49.304102   10084 start.go:563] Will wait 60s for crictl version
	I0910 18:41:49.312924   10084 ssh_runner.go:195] Run: which crictl
	I0910 18:41:49.327550   10084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:41:49.375848   10084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 18:41:49.383879   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:41:49.423090   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:41:49.459325   10084 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 18:41:49.461833   10084 out.go:177]   - env NO_PROXY=172.31.216.168
	I0910 18:41:49.464026   10084 out.go:177]   - env NO_PROXY=172.31.216.168,172.31.215.2
	I0910 18:41:49.466623   10084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 18:41:49.469983   10084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 18:41:49.469983   10084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 18:41:49.469983   10084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 18:41:49.469983   10084 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 18:41:49.472745   10084 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 18:41:49.472745   10084 ip.go:214] interface addr: 172.31.208.1/20
	I0910 18:41:49.482904   10084 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 18:41:49.489292   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:41:49.510490   10084 mustload.go:65] Loading cluster: ha-301400
	I0910 18:41:49.511159   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:41:49.511858   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:41:51.374866   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:51.374866   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:51.375019   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:41:51.375712   10084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400 for IP: 172.31.217.146
	I0910 18:41:51.375789   10084 certs.go:194] generating shared ca certs ...
	I0910 18:41:51.375789   10084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:41:51.376751   10084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 18:41:51.376751   10084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 18:41:51.377312   10084 certs.go:256] generating profile certs ...
	I0910 18:41:51.377927   10084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key
	I0910 18:41:51.378023   10084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.65ff3b51
	I0910 18:41:51.378185   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.65ff3b51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.216.168 172.31.215.2 172.31.217.146 172.31.223.254]
	I0910 18:41:51.650674   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.65ff3b51 ...
	I0910 18:41:51.650674   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.65ff3b51: {Name:mk7e75359aee205b4655795e8f8d7e03cf42ccc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:41:51.651932   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.65ff3b51 ...
	I0910 18:41:51.651932   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.65ff3b51: {Name:mk031be9627b72bc55c6cc69b16f4cca6f9c43f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:41:51.652932   10084 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.65ff3b51 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt
	I0910 18:41:51.669397   10084 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.65ff3b51 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key
	I0910 18:41:51.670594   10084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key
	I0910 18:41:51.670594   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:41:51.671178   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:41:51.671234   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:41:51.671234   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:41:51.671234   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:41:51.671234   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:41:51.672212   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:41:51.672212   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:41:51.672212   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 18:41:51.672957   10084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 18:41:51.672957   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 18:41:51.672957   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 18:41:51.673686   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 18:41:51.673686   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 18:41:51.674380   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 18:41:51.674380   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:41:51.674380   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 18:41:51.674380   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 18:41:51.675072   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:41:53.577807   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:53.577807   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:53.577807   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:55.832608   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:41:55.832608   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:55.832772   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:41:55.936529   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0910 18:41:55.944410   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0910 18:41:55.973391   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0910 18:41:55.980456   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0910 18:41:56.009680   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0910 18:41:56.017073   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0910 18:41:56.048746   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0910 18:41:56.055716   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0910 18:41:56.082397   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0910 18:41:56.088959   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0910 18:41:56.117919   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0910 18:41:56.124068   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0910 18:41:56.143384   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:41:56.189469   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 18:41:56.237486   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:41:56.279607   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 18:41:56.323849   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0910 18:41:56.366998   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:41:56.412323   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:41:56.458809   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:41:56.505398   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:41:56.553087   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 18:41:56.599610   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 18:41:56.646529   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0910 18:41:56.675720   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0910 18:41:56.705034   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0910 18:41:56.733055   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0910 18:41:56.760737   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0910 18:41:56.794394   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0910 18:41:56.824014   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0910 18:41:56.864269   10084 ssh_runner.go:195] Run: openssl version
	I0910 18:41:56.879790   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 18:41:56.906262   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 18:41:56.912679   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 18:41:56.920684   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 18:41:56.938518   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 18:41:56.962983   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 18:41:56.990905   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 18:41:56.996840   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 18:41:57.004842   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 18:41:57.023411   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:41:57.054459   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:41:57.085087   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:41:57.092462   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:41:57.100068   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:41:57.116436   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:41:57.143443   10084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:41:57.150789   10084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:41:57.151047   10084 kubeadm.go:934] updating node {m03 172.31.217.146 8443 v1.31.0 docker true true} ...
	I0910 18:41:57.151136   10084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-301400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.217.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:41:57.151254   10084 kube-vip.go:115] generating kube-vip config ...
	I0910 18:41:57.159238   10084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 18:41:57.183011   10084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 18:41:57.183011   10084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0910 18:41:57.195533   10084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:41:57.209857   10084 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 18:41:57.217838   10084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 18:41:57.237693   10084 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0910 18:41:57.237693   10084 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0910 18:41:57.237778   10084 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0910 18:41:57.237931   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 18:41:57.237931   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 18:41:57.249450   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 18:41:57.250316   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 18:41:57.251315   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:41:57.257908   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 18:41:57.257908   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 18:41:57.257908   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 18:41:57.257908   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 18:41:57.297810   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 18:41:57.308239   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 18:41:57.360202   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 18:41:57.360202   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0910 18:41:58.351462   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0910 18:41:58.376501   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0910 18:41:58.408281   10084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:41:58.438069   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 18:41:58.480485   10084 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0910 18:41:58.488791   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:41:58.519118   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:58.702421   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:41:58.729434   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:41:58.730430   10084 start.go:317] joinCluster: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.31.217.146 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:41:58.730430   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 18:41:58.730430   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:42:00.572708   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:42:00.572963   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:42:00.572963   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:42:02.852297   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:42:02.852941   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:42:02.853325   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:42:03.039734   10084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3090139s)
	I0910 18:42:03.039907   10084 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.31.217.146 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:42:03.040082   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token craoai.8re530ootiipv2pi --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-301400-m03 --control-plane --apiserver-advertise-address=172.31.217.146 --apiserver-bind-port=8443"
	I0910 18:42:46.915020   10084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token craoai.8re530ootiipv2pi --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-301400-m03 --control-plane --apiserver-advertise-address=172.31.217.146 --apiserver-bind-port=8443": (43.8719044s)
	I0910 18:42:46.915020   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 18:42:47.668610   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-301400-m03 minikube.k8s.io/updated_at=2024_09_10T18_42_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-301400 minikube.k8s.io/primary=false
	I0910 18:42:47.827197   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-301400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0910 18:42:47.991643   10084 start.go:319] duration metric: took 49.2579005s to joinCluster
	I0910 18:42:47.991792   10084 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.31.217.146 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:42:47.992104   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:42:47.995199   10084 out.go:177] * Verifying Kubernetes components...
	I0910 18:42:48.006587   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:42:48.399806   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:42:48.433709   10084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:42:48.434256   10084 kapi.go:59] client config for ha-301400: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0910 18:42:48.434441   10084 kubeadm.go:483] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.216.168:8443
	I0910 18:42:48.434705   10084 node_ready.go:35] waiting up to 6m0s for node "ha-301400-m03" to be "Ready" ...
	I0910 18:42:48.435254   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:48.435254   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:48.435254   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:48.435254   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:48.449998   10084 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0910 18:42:48.938449   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:48.938449   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:48.938449   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:48.938449   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:48.944280   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:42:49.444286   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:49.444327   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:49.444327   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:49.444360   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:49.448064   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:49.948818   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:49.949009   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:49.949009   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:49.949009   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:49.952794   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:50.442409   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:50.442409   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:50.442409   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:50.442409   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:50.445989   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:50.452223   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:50.948368   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:50.948368   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:50.948368   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:50.948368   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:50.952456   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:51.439228   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:51.439228   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:51.439228   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:51.439228   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:51.443496   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:51.948573   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:51.948791   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:51.948873   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:51.948873   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:52.180072   10084 round_trippers.go:574] Response Status: 200 OK in 231 milliseconds
	I0910 18:42:52.441824   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:52.441824   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:52.441824   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:52.441824   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:52.446434   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:52.951241   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:52.951307   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:52.951307   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:52.951307   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:52.954925   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:52.956172   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:53.443069   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:53.443149   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:53.443149   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:53.443149   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:53.495391   10084 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0910 18:42:53.944617   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:53.944617   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:53.944617   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:53.944617   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:53.948046   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:54.438216   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:54.438216   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:54.438216   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:54.438216   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:54.442787   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:54.938768   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:54.938927   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:54.938927   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:54.938927   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:54.942371   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:55.439393   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:55.439393   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:55.439393   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:55.439393   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:55.444182   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:55.444251   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:55.939343   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:55.939518   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:55.939518   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:55.939518   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:55.946814   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:42:56.437562   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:56.437562   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:56.437562   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:56.437562   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:56.441601   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:56.942452   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:56.942452   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:56.942452   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:56.942452   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:56.947148   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:57.441570   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:57.441570   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:57.441570   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:57.441570   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:57.447031   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:42:57.447949   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:57.943054   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:57.943054   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:57.943148   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:57.943148   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:57.946624   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:58.440893   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:58.440958   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:58.440958   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:58.440958   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:58.447361   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:42:58.941963   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:58.942019   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:58.942019   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:58.942019   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:58.951106   10084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 18:42:59.442766   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:59.442766   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:59.443067   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:59.443067   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:59.450321   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:42:59.451353   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:59.944842   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:59.944933   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:59.944933   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:59.944933   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:59.949666   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:00.442797   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:00.442797   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:00.442797   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:00.442797   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:00.447269   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:00.945354   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:00.945377   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:00.945377   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:00.945377   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:00.949543   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:01.450005   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:01.450072   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:01.450072   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:01.450072   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:01.458499   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:01.459192   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:43:01.949858   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:01.949858   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:01.949858   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:01.949858   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:01.957069   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:02.449716   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:02.449828   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:02.449828   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:02.449828   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:02.454184   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:02.950123   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:02.950123   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:02.950123   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:02.950123   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:02.958112   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:03.449438   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:03.449438   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:03.449930   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:03.449952   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:03.456870   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:43:03.948720   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:03.948965   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:03.948965   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:03.948965   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:03.953441   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:03.953804   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:43:04.447656   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:04.447656   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:04.447656   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:04.447656   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:04.453925   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:43:04.947306   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:04.947496   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:04.947496   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:04.947496   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:04.952204   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:05.446997   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:05.446997   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:05.446997   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:05.446997   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:05.451537   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:05.947687   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:05.947785   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:05.947785   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:05.947785   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:05.951284   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:06.447072   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:06.447136   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:06.447136   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:06.447136   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:06.454913   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:06.456506   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:43:06.949058   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:06.949243   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:06.949243   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:06.949243   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:06.954443   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:07.437684   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:07.437745   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:07.437745   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:07.437745   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:07.445135   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:07.938198   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:07.938198   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:07.938284   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:07.938284   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:07.942659   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:08.438093   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:08.438093   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:08.438093   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:08.438093   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:08.443072   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:08.951438   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:08.951438   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:08.951438   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:08.951438   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:08.957833   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:43:08.959846   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:43:09.449353   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:09.449584   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.449584   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.449584   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.455269   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:09.456839   10084 node_ready.go:49] node "ha-301400-m03" has status "Ready":"True"
	I0910 18:43:09.456899   10084 node_ready.go:38] duration metric: took 21.0207817s for node "ha-301400-m03" to be "Ready" ...
	I0910 18:43:09.456899   10084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:43:09.457022   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:09.457093   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.457093   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.457093   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.470016   10084 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0910 18:43:09.482250   10084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.482849   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fsbwc
	I0910 18:43:09.482849   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.482849   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.482849   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.486034   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.487028   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:09.487028   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.487028   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.487028   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.491309   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:09.492944   10084 pod_ready.go:93] pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.492944   10084 pod_ready.go:82] duration metric: took 10.6931ms for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.492944   10084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.493056   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ntqxc
	I0910 18:43:09.493056   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.493056   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.493056   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.496273   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.497717   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:09.497717   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.497771   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.497771   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.501233   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.501731   10084 pod_ready.go:93] pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.501731   10084 pod_ready.go:82] duration metric: took 8.7863ms for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.501796   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.501853   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400
	I0910 18:43:09.501853   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.501853   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.501853   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.505228   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.506217   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:09.506217   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.506217   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.506217   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.510379   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:09.511220   10084 pod_ready.go:93] pod "etcd-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.511303   10084 pod_ready.go:82] duration metric: took 9.5061ms for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.511303   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.511405   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 18:43:09.511440   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.511440   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.511440   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.515370   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.516295   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:09.516355   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.516355   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.516355   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.518584   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:43:09.520000   10084 pod_ready.go:93] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.520064   10084 pod_ready.go:82] duration metric: took 8.7604ms for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.520064   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.652630   10084 request.go:632] Waited for 132.2789ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m03
	I0910 18:43:09.652771   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m03
	I0910 18:43:09.652771   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.652771   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.652771   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.658017   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:09.854574   10084 request.go:632] Waited for 195.5722ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:09.854909   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:09.854909   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.854909   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.854909   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.860632   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:09.861959   10084 pod_ready.go:93] pod "etcd-ha-301400-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.861959   10084 pod_ready.go:82] duration metric: took 341.8724ms for pod "etcd-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.862035   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.058405   10084 request.go:632] Waited for 196.2328ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400
	I0910 18:43:10.058514   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400
	I0910 18:43:10.058514   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.058514   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.058514   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.067805   10084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 18:43:10.262579   10084 request.go:632] Waited for 193.1122ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:10.262830   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:10.262925   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.262925   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.262925   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.268295   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:10.269294   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:10.269384   10084 pod_ready.go:82] duration metric: took 407.2315ms for pod "kube-apiserver-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.269384   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.452332   10084 request.go:632] Waited for 182.6066ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m02
	I0910 18:43:10.452503   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m02
	I0910 18:43:10.452503   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.452503   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.452503   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.458014   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:10.655134   10084 request.go:632] Waited for 196.1131ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:10.655134   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:10.655134   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.655134   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.655134   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.659516   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:10.660318   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:10.660318   10084 pod_ready.go:82] duration metric: took 390.9078ms for pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.660318   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.858945   10084 request.go:632] Waited for 198.6133ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m03
	I0910 18:43:10.859267   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m03
	I0910 18:43:10.859267   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.859267   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.859364   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.863812   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:11.063735   10084 request.go:632] Waited for 198.0698ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:11.063735   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:11.063735   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.063735   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.063735   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.069545   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:11.070916   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:11.070916   10084 pod_ready.go:82] duration metric: took 410.5702ms for pod "kube-apiserver-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.071051   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.250697   10084 request.go:632] Waited for 179.5259ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400
	I0910 18:43:11.250796   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400
	I0910 18:43:11.250979   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.250979   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.251012   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.254579   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:11.457179   10084 request.go:632] Waited for 200.7345ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:11.457254   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:11.457254   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.457254   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.457254   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.462013   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:11.462700   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:11.462700   10084 pod_ready.go:82] duration metric: took 391.6225ms for pod "kube-controller-manager-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.462700   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.659562   10084 request.go:632] Waited for 196.6255ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m02
	I0910 18:43:11.659677   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m02
	I0910 18:43:11.659677   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.659677   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.659811   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.664988   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:11.860576   10084 request.go:632] Waited for 194.7892ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:11.861171   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:11.861171   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.861171   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.861236   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.865930   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:11.866905   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:11.866905   10084 pod_ready.go:82] duration metric: took 404.178ms for pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.866905   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.064099   10084 request.go:632] Waited for 196.9392ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m03
	I0910 18:43:12.064304   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m03
	I0910 18:43:12.064304   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.064304   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.064304   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.068710   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:12.251852   10084 request.go:632] Waited for 182.3669ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:12.251852   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:12.252273   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.252273   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.252273   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.258181   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:12.258940   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:12.258940   10084 pod_ready.go:82] duration metric: took 392.009ms for pod "kube-controller-manager-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.258940   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hqkvv" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.456116   10084 request.go:632] Waited for 196.9174ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqkvv
	I0910 18:43:12.456310   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqkvv
	I0910 18:43:12.456440   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.456440   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.456440   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.460764   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:12.658377   10084 request.go:632] Waited for 196.2862ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:12.658614   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:12.658614   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.658614   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.658614   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.662652   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:12.663405   10084 pod_ready.go:93] pod "kube-proxy-hqkvv" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:12.663476   10084 pod_ready.go:82] duration metric: took 404.5083ms for pod "kube-proxy-hqkvv" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.663476   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jczrq" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.863543   10084 request.go:632] Waited for 199.9413ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jczrq
	I0910 18:43:12.863986   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jczrq
	I0910 18:43:12.863986   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.863986   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.863986   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.869686   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:13.052474   10084 request.go:632] Waited for 181.7834ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:13.052771   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:13.052771   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.052771   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.052852   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.057624   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:13.058940   10084 pod_ready.go:93] pod "kube-proxy-jczrq" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:13.058940   10084 pod_ready.go:82] duration metric: took 395.4371ms for pod "kube-proxy-jczrq" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.058940   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sh5jk" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.255273   10084 request.go:632] Waited for 196.3202ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sh5jk
	I0910 18:43:13.255273   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sh5jk
	I0910 18:43:13.255273   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.255273   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.255273   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.260456   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:13.456319   10084 request.go:632] Waited for 194.1993ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:13.456610   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:13.456610   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.456610   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.456610   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.473221   10084 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0910 18:43:13.474407   10084 pod_ready.go:93] pod "kube-proxy-sh5jk" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:13.474407   10084 pod_ready.go:82] duration metric: took 415.4401ms for pod "kube-proxy-sh5jk" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.474407   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.660305   10084 request.go:632] Waited for 185.558ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400
	I0910 18:43:13.660684   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400
	I0910 18:43:13.660684   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.660760   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.660760   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.667209   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:13.865577   10084 request.go:632] Waited for 197.3168ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:13.865788   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:13.865788   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.865788   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.865892   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.870632   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:13.872464   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:13.872464   10084 pod_ready.go:82] duration metric: took 398.0301ms for pod "kube-scheduler-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.872537   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:14.051329   10084 request.go:632] Waited for 178.3777ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m02
	I0910 18:43:14.051440   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m02
	I0910 18:43:14.051440   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.051440   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.051440   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.057197   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:14.253749   10084 request.go:632] Waited for 194.4435ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:14.253749   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:14.253856   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.253856   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.253856   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.261458   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:14.262398   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:14.262398   10084 pod_ready.go:82] duration metric: took 389.8017ms for pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:14.262398   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:14.457047   10084 request.go:632] Waited for 194.3858ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m03
	I0910 18:43:14.457268   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m03
	I0910 18:43:14.457387   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.457387   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.457525   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.462393   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:14.659961   10084 request.go:632] Waited for 196.1271ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:14.660062   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:14.660183   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.660239   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.660239   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.664528   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:14.665449   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:14.665449   10084 pod_ready.go:82] duration metric: took 403.0243ms for pod "kube-scheduler-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:14.665449   10084 pod_ready.go:39] duration metric: took 5.2081407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:43:14.665449   10084 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:43:14.674018   10084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:43:14.700736   10084 api_server.go:72] duration metric: took 26.7070924s to wait for apiserver process to appear ...
	I0910 18:43:14.700736   10084 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:43:14.700736   10084 api_server.go:253] Checking apiserver healthz at https://172.31.216.168:8443/healthz ...
	I0910 18:43:14.710550   10084 api_server.go:279] https://172.31.216.168:8443/healthz returned 200:
	ok
	I0910 18:43:14.710872   10084 round_trippers.go:463] GET https://172.31.216.168:8443/version
	I0910 18:43:14.710872   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.710930   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.710930   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.711931   10084 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 18:43:14.711931   10084 api_server.go:141] control plane version: v1.31.0
	I0910 18:43:14.711931   10084 api_server.go:131] duration metric: took 11.195ms to wait for apiserver health ...
	I0910 18:43:14.711931   10084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:43:14.861690   10084 request.go:632] Waited for 149.6536ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:14.861690   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:14.861690   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.861690   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.861690   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.869437   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:43:14.877370   10084 system_pods.go:59] 24 kube-system pods found
	I0910 18:43:14.877370   10084 system_pods.go:61] "coredns-6f6b679f8f-fsbwc" [d8424acd-9e8d-42f0-bb05-3c81f13ce95f] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "coredns-6f6b679f8f-ntqxc" [255790ad-d8ab-404b-8a56-d4b0d4749c5f] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "etcd-ha-301400" [53ecd514-99fb-4e66-990a-1aae70a58578] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "etcd-ha-301400-m02" [53e8cab6-7a5d-481e-8147-034a63ded164] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "etcd-ha-301400-m03" [cd530f29-da8a-4b9b-a9c6-a93c637af337] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kindnet-7zqv2" [5b0541a3-83c5-4e1d-8349-dd260cb795ed] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kindnet-c72m2" [c1cda00f-a399-41b6-84a0-083a1e600757] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kindnet-jv4nt" [17e701eb-dfb9-4f30-8cd6-817e85ccef65] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-apiserver-ha-301400" [8616d1fb-0711-40db-bef3-3f3eb9caf056] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-apiserver-ha-301400-m02" [de432e1e-2312-4812-bbee-89731ea35356] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-apiserver-ha-301400-m03" [93058819-1974-4514-ac14-43eabf15c9fc] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-controller-manager-ha-301400" [f59e11f0-96f0-4cb8-878f-9e5076f0a5f5] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-controller-manager-ha-301400-m02" [b48f9144-ac88-4139-a35f-0f7d0690821f] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-controller-manager-ha-301400-m03" [e32de6c3-1398-437f-a942-8f8323461e4a] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-proxy-hqkvv" [f278de26-1d40-4bff-8681-779d4641ac03] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-proxy-jczrq" [d3cf8bce-7a23-4561-b4d5-8bbab4244624] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-proxy-sh5jk" [18240255-e421-4430-b4bb-49ff7b5430d8] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-scheduler-ha-301400" [08507d01-54f8-4cbe-84bb-aa4e06c16520] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-scheduler-ha-301400-m02" [3beb7194-4252-49ed-8476-f624db9af5fa] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-scheduler-ha-301400-m03" [58f0444b-c1f8-4a39-804c-5c05f79010a2] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-vip-ha-301400" [fe68ce21-9a06-4d4a-9dc1-d36e76a54ef1] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-vip-ha-301400-m02" [b23f424b-46c2-4f19-9175-d061bb1966e5] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-vip-ha-301400-m03" [7f449d83-55ba-4467-89d0-abb1d20b4707] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "storage-provisioner" [9d7be235-eee4-4f13-bc1c-7777806e77f2] Running
	I0910 18:43:14.877370   10084 system_pods.go:74] duration metric: took 165.4275ms to wait for pod list to return data ...
	I0910 18:43:14.877370   10084 default_sa.go:34] waiting for default service account to be created ...
	I0910 18:43:15.049898   10084 request.go:632] Waited for 172.3658ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/default/serviceaccounts
	I0910 18:43:15.049898   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/default/serviceaccounts
	I0910 18:43:15.049898   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:15.049898   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:15.049898   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:15.054389   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:15.055269   10084 default_sa.go:45] found service account: "default"
	I0910 18:43:15.055269   10084 default_sa.go:55] duration metric: took 177.8867ms for default service account to be created ...
	I0910 18:43:15.055269   10084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 18:43:15.252498   10084 request.go:632] Waited for 197.1081ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:15.252849   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:15.252849   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:15.252849   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:15.252849   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:15.260537   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:15.272068   10084 system_pods.go:86] 24 kube-system pods found
	I0910 18:43:15.272068   10084 system_pods.go:89] "coredns-6f6b679f8f-fsbwc" [d8424acd-9e8d-42f0-bb05-3c81f13ce95f] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "coredns-6f6b679f8f-ntqxc" [255790ad-d8ab-404b-8a56-d4b0d4749c5f] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "etcd-ha-301400" [53ecd514-99fb-4e66-990a-1aae70a58578] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "etcd-ha-301400-m02" [53e8cab6-7a5d-481e-8147-034a63ded164] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "etcd-ha-301400-m03" [cd530f29-da8a-4b9b-a9c6-a93c637af337] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kindnet-7zqv2" [5b0541a3-83c5-4e1d-8349-dd260cb795ed] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kindnet-c72m2" [c1cda00f-a399-41b6-84a0-083a1e600757] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kindnet-jv4nt" [17e701eb-dfb9-4f30-8cd6-817e85ccef65] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-apiserver-ha-301400" [8616d1fb-0711-40db-bef3-3f3eb9caf056] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-apiserver-ha-301400-m02" [de432e1e-2312-4812-bbee-89731ea35356] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-apiserver-ha-301400-m03" [93058819-1974-4514-ac14-43eabf15c9fc] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-controller-manager-ha-301400" [f59e11f0-96f0-4cb8-878f-9e5076f0a5f5] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-controller-manager-ha-301400-m02" [b48f9144-ac88-4139-a35f-0f7d0690821f] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-controller-manager-ha-301400-m03" [e32de6c3-1398-437f-a942-8f8323461e4a] Running
	I0910 18:43:15.272605   10084 system_pods.go:89] "kube-proxy-hqkvv" [f278de26-1d40-4bff-8681-779d4641ac03] Running
	I0910 18:43:15.272605   10084 system_pods.go:89] "kube-proxy-jczrq" [d3cf8bce-7a23-4561-b4d5-8bbab4244624] Running
	I0910 18:43:15.272605   10084 system_pods.go:89] "kube-proxy-sh5jk" [18240255-e421-4430-b4bb-49ff7b5430d8] Running
	I0910 18:43:15.272605   10084 system_pods.go:89] "kube-scheduler-ha-301400" [08507d01-54f8-4cbe-84bb-aa4e06c16520] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-scheduler-ha-301400-m02" [3beb7194-4252-49ed-8476-f624db9af5fa] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-scheduler-ha-301400-m03" [58f0444b-c1f8-4a39-804c-5c05f79010a2] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-vip-ha-301400" [fe68ce21-9a06-4d4a-9dc1-d36e76a54ef1] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-vip-ha-301400-m02" [b23f424b-46c2-4f19-9175-d061bb1966e5] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-vip-ha-301400-m03" [7f449d83-55ba-4467-89d0-abb1d20b4707] Running
	I0910 18:43:15.272760   10084 system_pods.go:89] "storage-provisioner" [9d7be235-eee4-4f13-bc1c-7777806e77f2] Running
	I0910 18:43:15.272760   10084 system_pods.go:126] duration metric: took 217.477ms to wait for k8s-apps to be running ...
	I0910 18:43:15.272760   10084 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 18:43:15.280537   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:43:15.305384   10084 system_svc.go:56] duration metric: took 32.6218ms WaitForService to wait for kubelet
	I0910 18:43:15.305384   10084 kubeadm.go:582] duration metric: took 27.3117004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:43:15.306372   10084 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:43:15.455159   10084 request.go:632] Waited for 148.7523ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes
	I0910 18:43:15.455331   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes
	I0910 18:43:15.455331   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:15.455331   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:15.455331   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:15.459970   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:15.461750   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:43:15.461750   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:43:15.461822   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:43:15.461822   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:43:15.461822   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:43:15.461888   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:43:15.461888   10084 node_conditions.go:105] duration metric: took 155.5052ms to run NodePressure ...
	I0910 18:43:15.461888   10084 start.go:241] waiting for startup goroutines ...
	I0910 18:43:15.461979   10084 start.go:255] writing updated cluster config ...
	I0910 18:43:15.472978   10084 ssh_runner.go:195] Run: rm -f paused
	I0910 18:43:15.596394   10084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 18:43:15.600316   10084 out.go:177] * Done! kubectl is now configured to use "ha-301400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 10 18:36:14 ha-301400 dockerd[1441]: time="2024-09-10T18:36:14.960365290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:36:15 ha-301400 dockerd[1441]: time="2024-09-10T18:36:15.073016116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:36:15 ha-301400 dockerd[1441]: time="2024-09-10T18:36:15.073305739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:36:15 ha-301400 dockerd[1441]: time="2024-09-10T18:36:15.073350243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:36:15 ha-301400 dockerd[1441]: time="2024-09-10T18:36:15.073477453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.729840003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.730778566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.730812868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.731085886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.739896770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.740031078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.740119284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.740357600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 cri-dockerd[1332]: time="2024-09-10T18:43:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2d41fd59c1d9af29da14430b62618316df34c9c6f383cfa936a194032efb3d41/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 18:43:49 ha-301400 cri-dockerd[1332]: time="2024-09-10T18:43:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/47a3e0c452177ecf58bdad369c0621c07eb658c2bfde06683af2f9c11ab8bf4b/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 18:43:51 ha-301400 cri-dockerd[1332]: time="2024-09-10T18:43:51Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Sep 10 18:43:51 ha-301400 dockerd[1441]: time="2024-09-10T18:43:51.620315979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:43:51 ha-301400 dockerd[1441]: time="2024-09-10T18:43:51.620428886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:43:51 ha-301400 dockerd[1441]: time="2024-09-10T18:43:51.620447988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:51 ha-301400 dockerd[1441]: time="2024-09-10T18:43:51.620595597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:51 ha-301400 cri-dockerd[1332]: time="2024-09-10T18:43:51Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Image is up to date for gcr.io/k8s-minikube/busybox:1.28"
	Sep 10 18:43:52 ha-301400 dockerd[1441]: time="2024-09-10T18:43:52.107272350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:43:52 ha-301400 dockerd[1441]: time="2024-09-10T18:43:52.107400058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:43:52 ha-301400 dockerd[1441]: time="2024-09-10T18:43:52.107430460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:52 ha-301400 dockerd[1441]: time="2024-09-10T18:43:52.107539667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1ab47a8d691f1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   47a3e0c452177       busybox-7dff88458-wbkmw
	edc8f4528b757       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   2d41fd59c1d9a       busybox-7dff88458-d2tcx
	34acad6b875b8       cbb01a7bd410d                                                                                         8 minutes ago        Running             coredns                   0                   6873bb9deffdb       coredns-6f6b679f8f-ntqxc
	bea32c778c54a       cbb01a7bd410d                                                                                         8 minutes ago        Running             coredns                   0                   92aa8e8846ef1       coredns-6f6b679f8f-fsbwc
	a32b3328b2f73       6e38f40d628db                                                                                         8 minutes ago        Running             storage-provisioner       0                   b2896be8301c2       storage-provisioner
	2bf8ec4096587       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              8 minutes ago        Running             kindnet-cni               0                   818bfadbd7a45       kindnet-7zqv2
	bed0fdc399e60       ad83b2ca7b09e                                                                                         8 minutes ago        Running             kube-proxy                0                   2e8aa95bd74fb       kube-proxy-sh5jk
	6f1c626fb447e       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     9 minutes ago        Running             kube-vip                  0                   2c1eba63c15e0       kube-vip-ha-301400
	54f16f39e60d6       604f5db92eaa8                                                                                         9 minutes ago        Running             kube-apiserver            0                   78b1face6c00d       kube-apiserver-ha-301400
	0a05e60cd24cd       1766f54c897f0                                                                                         9 minutes ago        Running             kube-scheduler            0                   6e9a59232ceae       kube-scheduler-ha-301400
	43a1ed13d84ac       2e96e5913fc06                                                                                         9 minutes ago        Running             etcd                      0                   a4bc6603ad4bd       etcd-ha-301400
	8285765ba9cc8       045733566833c                                                                                         9 minutes ago        Running             kube-controller-manager   0                   29c399af97983       kube-controller-manager-ha-301400
	
	
	==> coredns [34acad6b875b] <==
	[INFO] 10.244.0.5:54343 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001863717s
	[INFO] 10.244.0.4:38286 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000365923s
	[INFO] 10.244.0.4:52380 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.034212755s
	[INFO] 10.244.0.4:48204 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000518133s
	[INFO] 10.244.0.4:35151 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000270117s
	[INFO] 10.244.1.2:50300 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015811596s
	[INFO] 10.244.1.2:39428 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000286118s
	[INFO] 10.244.1.2:48752 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084906s
	[INFO] 10.244.1.2:50945 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000053603s
	[INFO] 10.244.1.2:32923 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060704s
	[INFO] 10.244.0.5:59533 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000396725s
	[INFO] 10.244.0.5:52236 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130708s
	[INFO] 10.244.0.5:51401 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015141s
	[INFO] 10.244.0.4:33144 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108007s
	[INFO] 10.244.0.4:44342 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134408s
	[INFO] 10.244.0.4:51342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000264117s
	[INFO] 10.244.1.2:60885 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093806s
	[INFO] 10.244.0.5:54868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188812s
	[INFO] 10.244.0.5:60478 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212314s
	[INFO] 10.244.0.4:54087 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097807s
	[INFO] 10.244.0.4:59433 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.001138172s
	[INFO] 10.244.0.4:35612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114207s
	[INFO] 10.244.1.2:37333 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000398525s
	[INFO] 10.244.0.5:40458 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000498931s
	[INFO] 10.244.0.5:52873 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213813s
	
	
	==> coredns [bea32c778c54] <==
	[INFO] 10.244.0.5:33067 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000308319s
	[INFO] 10.244.0.4:35470 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111907s
	[INFO] 10.244.0.4:52824 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.034732888s
	[INFO] 10.244.0.4:40307 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130108s
	[INFO] 10.244.0.4:46203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117908s
	[INFO] 10.244.1.2:48152 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108007s
	[INFO] 10.244.1.2:46265 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138809s
	[INFO] 10.244.1.2:60351 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114307s
	[INFO] 10.244.0.5:57021 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000273317s
	[INFO] 10.244.0.5:33161 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014911s
	[INFO] 10.244.0.5:51861 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.002989688s
	[INFO] 10.244.0.5:47423 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000219714s
	[INFO] 10.244.0.5:33162 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000260816s
	[INFO] 10.244.0.4:51899 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093806s
	[INFO] 10.244.1.2:45956 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209913s
	[INFO] 10.244.1.2:40098 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133408s
	[INFO] 10.244.1.2:51724 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081805s
	[INFO] 10.244.0.5:53210 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000219614s
	[INFO] 10.244.0.5:46687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015281s
	[INFO] 10.244.0.4:37811 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000207813s
	[INFO] 10.244.1.2:51061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000625439s
	[INFO] 10.244.1.2:44257 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199213s
	[INFO] 10.244.1.2:34769 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157909s
	[INFO] 10.244.0.5:60343 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000288018s
	[INFO] 10.244.0.5:45411 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015851s
	
	
	==> describe nodes <==
	Name:               ha-301400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-301400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-301400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_35_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:35:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-301400
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:44:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:44:16 +0000   Tue, 10 Sep 2024 18:35:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:44:16 +0000   Tue, 10 Sep 2024 18:35:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:44:16 +0000   Tue, 10 Sep 2024 18:35:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:44:16 +0000   Tue, 10 Sep 2024 18:36:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.216.168
	  Hostname:    ha-301400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 9674892abe054124a29fe78dad3b3ea8
	  System UUID:                a46a0773-6043-5943-b07f-4fc55231d20c
	  Boot ID:                    d1445ab4-da2a-4fe2-a7f4-acb5e9b26c6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d2tcx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  default                     busybox-7dff88458-wbkmw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-6f6b679f8f-fsbwc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m59s
	  kube-system                 coredns-6f6b679f8f-ntqxc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m59s
	  kube-system                 etcd-ha-301400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m5s
	  kube-system                 kindnet-7zqv2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m
	  kube-system                 kube-apiserver-ha-301400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-controller-manager-ha-301400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-proxy-sh5jk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 kube-scheduler-ha-301400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-vip-ha-301400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m58s  kube-proxy       
	  Normal  Starting                 9m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m5s   kubelet          Node ha-301400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m5s   kubelet          Node ha-301400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m5s   kubelet          Node ha-301400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m     node-controller  Node ha-301400 event: Registered Node ha-301400 in Controller
	  Normal  NodeReady                8m38s  kubelet          Node ha-301400 status is now: NodeReady
	  Normal  RegisteredNode           5m37s  node-controller  Node ha-301400 event: Registered Node ha-301400 in Controller
	  Normal  RegisteredNode           118s   node-controller  Node ha-301400 event: Registered Node ha-301400 in Controller
	
	
	Name:               ha-301400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-301400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-301400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T18_39_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:39:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-301400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:44:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:44:10 +0000   Tue, 10 Sep 2024 18:39:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:44:10 +0000   Tue, 10 Sep 2024 18:39:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:44:10 +0000   Tue, 10 Sep 2024 18:39:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:44:10 +0000   Tue, 10 Sep 2024 18:39:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.215.2
	  Hostname:    ha-301400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 776b1cd39f8142e59946104bc632626a
	  System UUID:                994f6dcd-0ea0-c641-b486-db3369e52782
	  Boot ID:                    216e4cd9-8196-4488-b13c-07d9b6ebe988
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lnwzg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 etcd-ha-301400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m47s
	  kube-system                 kindnet-jv4nt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m47s
	  kube-system                 kube-apiserver-ha-301400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-controller-manager-ha-301400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-hqkvv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-scheduler-ha-301400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-vip-ha-301400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m48s (x8 over 5m48s)  kubelet          Node ha-301400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m48s (x8 over 5m48s)  kubelet          Node ha-301400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m48s (x7 over 5m48s)  kubelet          Node ha-301400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m45s                  node-controller  Node ha-301400-m02 event: Registered Node ha-301400-m02 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-301400-m02 event: Registered Node ha-301400-m02 in Controller
	  Normal  RegisteredNode           119s                   node-controller  Node ha-301400-m02 event: Registered Node ha-301400-m02 in Controller
	
	
	Name:               ha-301400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-301400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-301400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T18_42_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:42:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-301400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:44:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:43:12 +0000   Tue, 10 Sep 2024 18:42:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:43:12 +0000   Tue, 10 Sep 2024 18:42:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:43:12 +0000   Tue, 10 Sep 2024 18:42:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:43:12 +0000   Tue, 10 Sep 2024 18:43:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.217.146
	  Hostname:    ha-301400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 61aa00a070b7471daf3aa526379a1306
	  System UUID:                487dcf97-6c67-3d48-9807-df89639a7980
	  Boot ID:                    8b928b11-5293-478d-ad30-080a773cea30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-301400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m7s
	  kube-system                 kindnet-c72m2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m10s
	  kube-system                 kube-apiserver-ha-301400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-ha-301400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-jczrq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-ha-301400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-vip-ha-301400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node ha-301400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node ha-301400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node ha-301400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-301400-m03 event: Registered Node ha-301400-m03 in Controller
	  Normal  RegisteredNode           2m5s                   node-controller  Node ha-301400-m03 event: Registered Node ha-301400-m03 in Controller
	  Normal  RegisteredNode           119s                   node-controller  Node ha-301400-m03 event: Registered Node ha-301400-m03 in Controller
	
	
	==> dmesg <==
	[  +1.804630] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Sep10 18:34] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +41.794665] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.153729] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[Sep10 18:35] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +0.109668] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.492128] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +0.187301] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.214772] systemd-fstab-generator[1072]: Ignoring "noauto" option for root device
	[  +2.824271] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.190445] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.184721] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.260646] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[ +10.717407] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +0.104937] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.465855] systemd-fstab-generator[1683]: Ignoring "noauto" option for root device
	[  +5.308151] systemd-fstab-generator[1826]: Ignoring "noauto" option for root device
	[  +0.087881] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.222127] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.805126] systemd-fstab-generator[2318]: Ignoring "noauto" option for root device
	[  +6.508337] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.377050] kauditd_printk_skb: 29 callbacks suppressed
	[Sep10 18:39] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [43a1ed13d84a] <==
	{"level":"info","ts":"2024-09-10T18:42:45.613676Z","caller":"traceutil/trace.go:171","msg":"trace[195481170] transaction","detail":"{read_only:false; response_revision:1387; number_of_response:1; }","duration":"212.216002ms","start":"2024-09-10T18:42:45.401445Z","end":"2024-09-10T18:42:45.613661Z","steps":["trace[195481170] 'process raft request'  (duration: 42.009651ms)","trace[195481170] 'compare'  (duration: 169.411697ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T18:42:45.868837Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.674553ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T18:42:45.869152Z","caller":"traceutil/trace.go:171","msg":"trace[536560508] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1391; }","duration":"119.004976ms","start":"2024-09-10T18:42:45.750134Z","end":"2024-09-10T18:42:45.869139Z","steps":["trace[536560508] 'range keys from in-memory index tree'  (duration: 118.662653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T18:42:45.959483Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"461b675d3a83cb00","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-10T18:42:46.980322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f8d1d2f4692bc29 switched to configuration voters=(5051745057137478400 8433361632936279512 11496877512631434281)"}
	{"level":"info","ts":"2024-09-10T18:42:46.980462Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"2991d7bcd6df3228","local-member-id":"9f8d1d2f4692bc29"}
	{"level":"info","ts":"2024-09-10T18:42:46.980488Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"9f8d1d2f4692bc29","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"461b675d3a83cb00"}
	{"level":"warn","ts":"2024-09-10T18:42:52.286917Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"750951b929933dd8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"180.972373ms"}
	{"level":"warn","ts":"2024-09-10T18:42:52.286976Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"461b675d3a83cb00","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"181.037177ms"}
	{"level":"info","ts":"2024-09-10T18:42:52.394649Z","caller":"traceutil/trace.go:171","msg":"trace[2030700355] transaction","detail":"{read_only:false; response_revision:1447; number_of_response:1; }","duration":"462.134162ms","start":"2024-09-10T18:42:51.932499Z","end":"2024-09-10T18:42:52.394633Z","steps":["trace[2030700355] 'process raft request'  (duration: 462.021454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T18:42:52.394996Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T18:42:51.932479Z","time spent":"462.238669ms","remote":"127.0.0.1:48532","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/ha-301400-m03\" mod_revision:1434 > success:<request_put:<key:\"/registry/minions/ha-301400-m03\" value_size:4383 >> failure:<request_range:<key:\"/registry/minions/ha-301400-m03\" > >"}
	{"level":"info","ts":"2024-09-10T18:42:52.398556Z","caller":"traceutil/trace.go:171","msg":"trace[2112800893] linearizableReadLoop","detail":"{readStateIndex:1617; appliedIndex:1618; }","duration":"352.470642ms","start":"2024-09-10T18:42:52.046072Z","end":"2024-09-10T18:42:52.398543Z","steps":["trace[2112800893] 'read index received'  (duration: 352.467542ms)","trace[2112800893] 'applied index is now lower than readState.Index'  (duration: 2.3µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T18:42:52.398836Z","caller":"traceutil/trace.go:171","msg":"trace[2113962968] transaction","detail":"{read_only:false; response_revision:1448; number_of_response:1; }","duration":"355.707661ms","start":"2024-09-10T18:42:52.043111Z","end":"2024-09-10T18:42:52.398819Z","steps":["trace[2113962968] 'process raft request'  (duration: 355.54585ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T18:42:52.399957Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T18:42:52.043094Z","time spent":"356.818336ms","remote":"127.0.0.1:48612","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/ha-301400-m03\" mod_revision:1338 > success:<request_put:<key:\"/registry/leases/kube-node-lease/ha-301400-m03\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/ha-301400-m03\" > >"}
	{"level":"warn","ts":"2024-09-10T18:42:52.399263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.048781ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T18:42:52.400610Z","caller":"traceutil/trace.go:171","msg":"trace[1984044122] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1448; }","duration":"354.530281ms","start":"2024-09-10T18:42:52.046067Z","end":"2024-09-10T18:42:52.400598Z","steps":["trace[1984044122] 'agreement among raft nodes before linearized reading'  (duration: 353.023479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T18:42:52.403897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.95102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-301400-m03\" ","response":"range_response_count:1 size:4437"}
	{"level":"info","ts":"2024-09-10T18:42:52.403945Z","caller":"traceutil/trace.go:171","msg":"trace[346170321] range","detail":"{range_begin:/registry/minions/ha-301400-m03; range_end:; response_count:1; response_revision:1448; }","duration":"228.001423ms","start":"2024-09-10T18:42:52.175935Z","end":"2024-09-10T18:42:52.403937Z","steps":["trace[346170321] 'agreement among raft nodes before linearized reading'  (duration: 227.926718ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T18:42:53.948476Z","caller":"traceutil/trace.go:171","msg":"trace[708802482] transaction","detail":"{read_only:false; response_revision:1459; number_of_response:1; }","duration":"141.09314ms","start":"2024-09-10T18:42:53.807366Z","end":"2024-09-10T18:42:53.948459Z","steps":["trace[708802482] 'process raft request'  (duration: 140.757917ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T18:42:54.166442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.907104ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T18:42:54.166577Z","caller":"traceutil/trace.go:171","msg":"trace[32372150] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1459; }","duration":"120.062414ms","start":"2024-09-10T18:42:54.046501Z","end":"2024-09-10T18:42:54.166563Z","steps":["trace[32372150] 'range keys from in-memory index tree'  (duration: 119.885702ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T18:43:49.037529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.797245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-7dff88458-d2tcx\" ","response":"range_response_count:1 size:3032"}
	{"level":"info","ts":"2024-09-10T18:43:49.037584Z","caller":"traceutil/trace.go:171","msg":"trace[1718299917] range","detail":"{range_begin:/registry/pods/default/busybox-7dff88458-d2tcx; range_end:; response_count:1; response_revision:1670; }","duration":"101.862049ms","start":"2024-09-10T18:43:48.935709Z","end":"2024-09-10T18:43:49.037571Z","steps":["trace[1718299917] 'agreement among raft nodes before linearized reading'  (duration: 101.737741ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T18:43:49.037697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.701304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2024-09-10T18:43:49.037721Z","caller":"traceutil/trace.go:171","msg":"trace[1382697383] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1670; }","duration":"102.724806ms","start":"2024-09-10T18:43:48.934989Z","end":"2024-09-10T18:43:49.037714Z","steps":["trace[1382697383] 'agreement among raft nodes before linearized reading'  (duration: 102.682603ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:44:51 up 11 min,  0 users,  load average: 0.63, 0.50, 0.28
	Linux ha-301400 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2bf8ec409658] <==
	I0910 18:44:11.015480       1 main.go:299] handling current node
	I0910 18:44:21.017055       1 main.go:295] Handling node with IPs: map[172.31.216.168:{}]
	I0910 18:44:21.017125       1 main.go:299] handling current node
	I0910 18:44:21.017543       1 main.go:295] Handling node with IPs: map[172.31.215.2:{}]
	I0910 18:44:21.017577       1 main.go:322] Node ha-301400-m02 has CIDR [10.244.1.0/24] 
	I0910 18:44:21.017710       1 main.go:295] Handling node with IPs: map[172.31.217.146:{}]
	I0910 18:44:21.017824       1 main.go:322] Node ha-301400-m03 has CIDR [10.244.2.0/24] 
	I0910 18:44:31.015035       1 main.go:295] Handling node with IPs: map[172.31.217.146:{}]
	I0910 18:44:31.015487       1 main.go:322] Node ha-301400-m03 has CIDR [10.244.2.0/24] 
	I0910 18:44:31.015718       1 main.go:295] Handling node with IPs: map[172.31.216.168:{}]
	I0910 18:44:31.015818       1 main.go:299] handling current node
	I0910 18:44:31.015833       1 main.go:295] Handling node with IPs: map[172.31.215.2:{}]
	I0910 18:44:31.015840       1 main.go:322] Node ha-301400-m02 has CIDR [10.244.1.0/24] 
	I0910 18:44:41.015565       1 main.go:295] Handling node with IPs: map[172.31.216.168:{}]
	I0910 18:44:41.015759       1 main.go:299] handling current node
	I0910 18:44:41.015809       1 main.go:295] Handling node with IPs: map[172.31.215.2:{}]
	I0910 18:44:41.015834       1 main.go:322] Node ha-301400-m02 has CIDR [10.244.1.0/24] 
	I0910 18:44:41.015958       1 main.go:295] Handling node with IPs: map[172.31.217.146:{}]
	I0910 18:44:41.016125       1 main.go:322] Node ha-301400-m03 has CIDR [10.244.2.0/24] 
	I0910 18:44:51.023404       1 main.go:295] Handling node with IPs: map[172.31.216.168:{}]
	I0910 18:44:51.023437       1 main.go:299] handling current node
	I0910 18:44:51.023453       1 main.go:295] Handling node with IPs: map[172.31.215.2:{}]
	I0910 18:44:51.023460       1 main.go:322] Node ha-301400-m02 has CIDR [10.244.1.0/24] 
	I0910 18:44:51.023640       1 main.go:295] Handling node with IPs: map[172.31.217.146:{}]
	I0910 18:44:51.023649       1 main.go:322] Node ha-301400-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [54f16f39e60d] <==
	I0910 18:35:45.598882       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 18:35:46.255647       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:35:46.407648       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:35:46.451775       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0910 18:35:46.473079       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:35:51.753788       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0910 18:35:52.011539       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0910 18:42:42.329092       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 78.206µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0910 18:42:42.331452       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="PATCH" URI="/api/v1/namespaces/default/events/ha-301400-m03.17f3f62e68851b41" auditID="67956b62-0a18-441e-ae27-9aaff4086808"
	E0910 18:42:42.333093       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="7.801µs" method="PATCH" path="/api/v1/namespaces/default/events/ha-301400-m03.17f3f62e68851b41" result=null
	E0910 18:43:55.757843       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64492: use of closed network connection
	E0910 18:43:57.270356       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64494: use of closed network connection
	E0910 18:43:57.700306       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64496: use of closed network connection
	E0910 18:43:58.200897       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64498: use of closed network connection
	E0910 18:43:58.634175       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64500: use of closed network connection
	E0910 18:43:59.113707       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64502: use of closed network connection
	E0910 18:43:59.564522       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64504: use of closed network connection
	E0910 18:43:59.981102       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64507: use of closed network connection
	E0910 18:44:00.425883       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64509: use of closed network connection
	E0910 18:44:01.204715       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64512: use of closed network connection
	E0910 18:44:11.641413       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64514: use of closed network connection
	E0910 18:44:12.057864       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64517: use of closed network connection
	E0910 18:44:22.479215       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64519: use of closed network connection
	E0910 18:44:22.913638       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64522: use of closed network connection
	E0910 18:44:33.334980       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64524: use of closed network connection
	
	
	==> kube-controller-manager [8285765ba9cc] <==
	I0910 18:42:52.397682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	I0910 18:42:53.006569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	I0910 18:42:53.097989       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	I0910 18:43:09.585111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	I0910 18:43:09.622638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	I0910 18:43:10.001173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	I0910 18:43:12.845962       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	I0910 18:43:48.696676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="143.646719ms"
	I0910 18:43:48.912324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="214.700627ms"
	I0910 18:43:49.207903       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="295.498176ms"
	I0910 18:43:49.239098       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.037156ms"
	I0910 18:43:49.239352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="129.308µs"
	I0910 18:43:49.371390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.759697ms"
	I0910 18:43:49.372139       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.503µs"
	I0910 18:43:50.006993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="842.552µs"
	I0910 18:43:50.024590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.604µs"
	I0910 18:43:50.042825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.003µs"
	I0910 18:43:52.087434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.849ms"
	I0910 18:43:52.087791       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="281.618µs"
	I0910 18:43:52.207426       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.239313ms"
	I0910 18:43:52.207532       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.604µs"
	I0910 18:43:53.352753       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.28229ms"
	I0910 18:43:53.353419       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="498.631µs"
	I0910 18:44:10.224580       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 18:44:16.478468       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400"
	
	
	==> kube-proxy [bed0fdc399e6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:35:53.062905       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:35:53.077687       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.31.216.168"]
	E0910 18:35:53.077755       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:35:53.148721       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:35:53.148853       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:35:53.148893       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:35:53.153671       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:35:53.154563       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:35:53.154592       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:35:53.156134       1 config.go:197] "Starting service config controller"
	I0910 18:35:53.156464       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:35:53.156501       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:35:53.156588       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:35:53.157420       1 config.go:326] "Starting node config controller"
	I0910 18:35:53.157448       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:35:53.257160       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:35:53.257177       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:35:53.257588       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0a05e60cd24c] <==
	E0910 18:35:44.486607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.498054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 18:35:44.498091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.524555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 18:35:44.524693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.556428       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 18:35:44.556503       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 18:35:44.639868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 18:35:44.639917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.690820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 18:35:44.690871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.705642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 18:35:44.705961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.705917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 18:35:44.706143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.835403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 18:35:44.835506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 18:35:47.540732       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 18:42:41.697577       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-c72m2\": pod kindnet-c72m2 is already assigned to node \"ha-301400-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-c72m2" node="ha-301400-m03"
	E0910 18:42:41.697924       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c72m2\": pod kindnet-c72m2 is already assigned to node \"ha-301400-m03\"" pod="kube-system/kindnet-c72m2"
	I0910 18:42:41.699423       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c72m2" node="ha-301400-m03"
	E0910 18:42:41.698499       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sdw96\": pod kube-proxy-sdw96 is already assigned to node \"ha-301400-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sdw96" node="ha-301400-m03"
	E0910 18:42:41.702274       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 78996f3e-ec5e-4e51-a7af-e3297da1afbd(kube-system/kube-proxy-sdw96) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sdw96"
	E0910 18:42:41.704301       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sdw96\": pod kube-proxy-sdw96 is already assigned to node \"ha-301400-m03\"" pod="kube-system/kube-proxy-sdw96"
	I0910 18:42:41.704407       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sdw96" node="ha-301400-m03"
	
	
	==> kubelet <==
	Sep 10 18:41:46 ha-301400 kubelet[2325]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:41:46 ha-301400 kubelet[2325]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:41:46 ha-301400 kubelet[2325]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:42:46 ha-301400 kubelet[2325]: E0910 18:42:46.503966    2325 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:42:46 ha-301400 kubelet[2325]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:42:46 ha-301400 kubelet[2325]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:42:46 ha-301400 kubelet[2325]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:42:46 ha-301400 kubelet[2325]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:43:46 ha-301400 kubelet[2325]: E0910 18:43:46.503350    2325 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:43:46 ha-301400 kubelet[2325]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:43:46 ha-301400 kubelet[2325]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:43:46 ha-301400 kubelet[2325]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:43:46 ha-301400 kubelet[2325]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:43:48 ha-301400 kubelet[2325]: I0910 18:43:48.695353    2325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ntqxc" podStartSLOduration=476.69533377 podStartE2EDuration="7m56.69533377s" podCreationTimestamp="2024-09-10 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-10 18:36:15.798654782 +0000 UTC m=+29.587647957" watchObservedRunningTime="2024-09-10 18:43:48.69533377 +0000 UTC m=+482.484326845"
	Sep 10 18:43:48 ha-301400 kubelet[2325]: I0910 18:43:48.789367    2325 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8knc\" (UniqueName: \"kubernetes.io/projected/5f25b43f-f9f2-4b6f-94a6-3b40f2d4c8ba-kube-api-access-q8knc\") pod \"busybox-7dff88458-d2tcx\" (UID: \"5f25b43f-f9f2-4b6f-94a6-3b40f2d4c8ba\") " pod="default/busybox-7dff88458-d2tcx"
	Sep 10 18:43:48 ha-301400 kubelet[2325]: I0910 18:43:48.890462    2325 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcsv\" (UniqueName: \"kubernetes.io/projected/2e4d338f-dcd1-4278-b254-ce6f5055690f-kube-api-access-dkcsv\") pod \"busybox-7dff88458-wbkmw\" (UID: \"2e4d338f-dcd1-4278-b254-ce6f5055690f\") " pod="default/busybox-7dff88458-wbkmw"
	Sep 10 18:43:49 ha-301400 kubelet[2325]: I0910 18:43:49.995570    2325 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47a3e0c452177ecf58bdad369c0621c07eb658c2bfde06683af2f9c11ab8bf4b"
	Sep 10 18:43:50 ha-301400 kubelet[2325]: I0910 18:43:50.013566    2325 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d41fd59c1d9af29da14430b62618316df34c9c6f383cfa936a194032efb3d41"
	Sep 10 18:43:53 ha-301400 kubelet[2325]: I0910 18:43:53.279549    2325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-d2tcx" podStartSLOduration=3.9365280289999998 podStartE2EDuration="5.279525985s" podCreationTimestamp="2024-09-10 18:43:48 +0000 UTC" firstStartedPulling="2024-09-10 18:43:50.061789448 +0000 UTC m=+483.850782523" lastFinishedPulling="2024-09-10 18:43:51.404787404 +0000 UTC m=+485.193780479" observedRunningTime="2024-09-10 18:43:52.134539667 +0000 UTC m=+485.923532842" watchObservedRunningTime="2024-09-10 18:43:53.279525985 +0000 UTC m=+487.068519160"
	Sep 10 18:44:33 ha-301400 kubelet[2325]: E0910 18:44:33.339109    2325 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55590->127.0.0.1:37283: write tcp 127.0.0.1:55590->127.0.0.1:37283: write: broken pipe
	Sep 10 18:44:46 ha-301400 kubelet[2325]: E0910 18:44:46.509987    2325 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:44:46 ha-301400 kubelet[2325]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:44:46 ha-301400 kubelet[2325]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:44:46 ha-301400 kubelet[2325]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:44:46 ha-301400 kubelet[2325]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-301400 -n ha-301400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-301400 -n ha-301400: (10.8095953s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-301400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (63.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (256.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 node start m02 -v=7 --alsologtostderr
E0910 19:00:34.168745    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 node start m02 -v=7 --alsologtostderr: exit status 1 (2m53.0031612s)

                                                
                                                
-- stdout --
	* Starting "ha-301400-m02" control-plane node in "ha-301400" cluster
	* Restarting existing hyperv VM for "ha-301400-m02" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	* Verifying Kubernetes components...
	* Enabled addons: 

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 19:00:05.355857    5464 out.go:345] Setting OutFile to fd 760 ...
	I0910 19:00:05.426038    5464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:00:05.426038    5464 out.go:358] Setting ErrFile to fd 848...
	I0910 19:00:05.426038    5464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:00:05.439397    5464 mustload.go:65] Loading cluster: ha-301400
	I0910 19:00:05.439985    5464 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:00:05.441437    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:07.407907    5464 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 19:00:07.407907    5464 main.go:141] libmachine: [stderr =====>] : 
	W0910 19:00:07.408012    5464 host.go:58] "ha-301400-m02" host status: Stopped
	I0910 19:00:07.411310    5464 out.go:177] * Starting "ha-301400-m02" control-plane node in "ha-301400" cluster
	I0910 19:00:07.413648    5464 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:00:07.413648    5464 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 19:00:07.413648    5464 cache.go:56] Caching tarball of preloaded images
	I0910 19:00:07.414293    5464 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 19:00:07.414293    5464 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 19:00:07.414293    5464 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 19:00:07.416128    5464 start.go:360] acquireMachinesLock for ha-301400-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:00:07.416128    5464 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-301400-m02"
	I0910 19:00:07.416803    5464 start.go:96] Skipping create...Using existing machine configuration
	I0910 19:00:07.416862    5464 fix.go:54] fixHost starting: m02
	I0910 19:00:07.416931    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:09.336310    5464 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 19:00:09.336310    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:09.336310    5464 fix.go:112] recreateIfNeeded on ha-301400-m02: state=Stopped err=<nil>
	W0910 19:00:09.336389    5464 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 19:00:09.339124    5464 out.go:177] * Restarting existing hyperv VM for "ha-301400-m02" ...
	I0910 19:00:09.341294    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-301400-m02
	I0910 19:00:12.197608    5464 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:00:12.197608    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:12.198050    5464 main.go:141] libmachine: Waiting for host to start...
	I0910 19:00:12.198050    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:14.277722    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:14.277722    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:14.277722    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:16.564616    5464 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:00:16.564616    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:17.579729    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:19.565446    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:19.565446    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:19.565674    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:21.820737    5464 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:00:21.820785    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:22.828186    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:24.824253    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:24.824253    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:24.825037    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:27.143629    5464 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:00:27.144647    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:28.150821    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:30.198105    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:30.198285    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:30.198285    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:32.455037    5464 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:00:32.455037    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:33.467497    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:35.491825    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:35.492133    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:35.492133    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:37.820766    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:00:37.821173    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:37.823958    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:39.784025    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:39.784025    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:39.784025    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:42.098623    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:00:42.098853    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:42.099226    5464 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 19:00:42.101924    5464 machine.go:93] provisionDockerMachine start ...
	I0910 19:00:42.101924    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:44.078010    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:44.078010    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:44.078117    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:46.377662    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:00:46.377662    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:46.382497    5464 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:46.382497    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
	I0910 19:00:46.383024    5464 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:00:46.507579    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:00:46.507638    5464 buildroot.go:166] provisioning hostname "ha-301400-m02"
	I0910 19:00:46.507700    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:48.452847    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:48.452847    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:48.452938    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:50.777497    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:00:50.777497    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:50.781822    5464 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:50.782260    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
	I0910 19:00:50.782324    5464 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-301400-m02 && echo "ha-301400-m02" | sudo tee /etc/hostname
	I0910 19:00:50.951092    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-301400-m02
	
	I0910 19:00:50.952082    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:52.902296    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:52.902296    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:52.902296    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:55.196606    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:00:55.197515    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:55.201669    5464 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:55.202559    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
	I0910 19:00:55.202559    5464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-301400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-301400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-301400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:00:55.344792    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:00:55.344792    5464 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 19:00:55.344792    5464 buildroot.go:174] setting up certificates
	I0910 19:00:55.344792    5464 provision.go:84] configureAuth start
	I0910 19:00:55.344792    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:00:57.288132    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:00:57.288430    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:57.288430    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:00:59.616554    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:00:59.616554    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:00:59.616554    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:01.591443    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:01.591443    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:01.591525    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:03.890969    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:03.891046    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:03.891046    5464 provision.go:143] copyHostCerts
	I0910 19:01:03.891388    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 19:01:03.891568    5464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 19:01:03.891640    5464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 19:01:03.892008    5464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 19:01:03.893006    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 19:01:03.893303    5464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 19:01:03.893549    5464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 19:01:03.893636    5464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 19:01:03.895034    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 19:01:03.895088    5464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 19:01:03.895088    5464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 19:01:03.895088    5464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 19:01:03.895778    5464 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-301400-m02 san=[127.0.0.1 172.31.210.72 ha-301400-m02 localhost minikube]
	I0910 19:01:04.203305    5464 provision.go:177] copyRemoteCerts
	I0910 19:01:04.211313    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:01:04.211313    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:06.151234    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:06.152248    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:06.152365    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:08.494914    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:08.494996    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:08.494996    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 19:01:08.600762    5464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3890968s)
	I0910 19:01:08.600828    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 19:01:08.601406    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:01:08.650188    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 19:01:08.650567    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 19:01:08.696186    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 19:01:08.696186    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 19:01:08.741360    5464 provision.go:87] duration metric: took 13.395671s to configureAuth
	I0910 19:01:08.741360    5464 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:01:08.742203    5464 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:01:08.742269    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:10.700318    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:10.700318    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:10.700318    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:13.023927    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:13.024525    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:13.029280    5464 main.go:141] libmachine: Using SSH client type: native
	I0910 19:01:13.029933    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
	I0910 19:01:13.029933    5464 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 19:01:13.157134    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 19:01:13.157235    5464 buildroot.go:70] root file system type: tmpfs
	I0910 19:01:13.157467    5464 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 19:01:13.157554    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:15.096499    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:15.096499    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:15.096499    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:17.419841    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:17.420855    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:17.427549    5464 main.go:141] libmachine: Using SSH client type: native
	I0910 19:01:17.427549    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
	I0910 19:01:17.428210    5464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 19:01:17.584753    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 19:01:17.584880    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:19.567190    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:19.567416    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:19.567416    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:21.905923    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:21.905923    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:21.910628    5464 main.go:141] libmachine: Using SSH client type: native
	I0910 19:01:21.910700    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
	I0910 19:01:21.910700    5464 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 19:01:24.383155    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 19:01:24.383155    5464 machine.go:96] duration metric: took 42.2784002s to provisionDockerMachine
	I0910 19:01:24.383155    5464 start.go:293] postStartSetup for "ha-301400-m02" (driver="hyperv")
	I0910 19:01:24.383155    5464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:01:24.393142    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:01:24.393142    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:26.338886    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:26.338886    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:26.338962    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:28.654392    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:28.654392    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:28.654392    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 19:01:28.766806    5464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3733719s)
	I0910 19:01:28.777083    5464 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:01:28.784083    5464 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:01:28.784192    5464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 19:01:28.784698    5464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 19:01:28.785890    5464 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 19:01:28.785890    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 19:01:28.796652    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:01:28.814710    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 19:01:28.866740    5464 start.go:296] duration metric: took 4.4832853s for postStartSetup
	I0910 19:01:28.877073    5464 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0910 19:01:28.878062    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:30.846023    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:30.846023    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:30.846405    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:33.193913    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:33.194359    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:33.194743    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 19:01:33.296264    5464 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.4188963s)
	I0910 19:01:33.296264    5464 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0910 19:01:33.307398    5464 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0910 19:01:33.378148    5464 fix.go:56] duration metric: took 1m25.9555265s for fixHost
	I0910 19:01:33.378148    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:35.278657    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:35.278831    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:35.278831    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:37.549504    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:37.549988    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:37.553452    5464 main.go:141] libmachine: Using SSH client type: native
	I0910 19:01:37.554029    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
	I0910 19:01:37.554029    5464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:01:37.678498    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994897.899669378
	
	I0910 19:01:37.678498    5464 fix.go:216] guest clock: 1725994897.899669378
	I0910 19:01:37.678567    5464 fix.go:229] Guest: 2024-09-10 19:01:37.899669378 +0000 UTC Remote: 2024-09-10 19:01:33.3781488 +0000 UTC m=+88.093166601 (delta=4.521520578s)
	I0910 19:01:37.678637    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:39.641515    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:39.641515    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:39.641515    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:41.974680    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:41.974680    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:41.978823    5464 main.go:141] libmachine: Using SSH client type: native
	I0910 19:01:41.978877    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
	I0910 19:01:41.978877    5464 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725994897
	I0910 19:01:42.125317    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 19:01:37 UTC 2024
	
	I0910 19:01:42.125317    5464 fix.go:236] clock set: Tue Sep 10 19:01:37 UTC 2024
	 (err=<nil>)
	I0910 19:01:42.125431    5464 start.go:83] releasing machines lock for "ha-301400-m02", held for 1m34.7024341s
	I0910 19:01:42.125546    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:44.036153    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:44.036153    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:44.036153    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:46.352177    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:46.352177    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:46.355049    5464 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 19:01:46.355049    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:46.363124    5464 ssh_runner.go:195] Run: systemctl --version
	I0910 19:01:46.363124    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 19:01:48.328226    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:48.328226    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:48.328226    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:48.359329    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:48.359329    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:48.359329    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:01:50.711589    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:50.711589    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:50.711589    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 19:01:50.744736    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
	
	I0910 19:01:50.744736    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:50.745736    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 19:01:50.796109    5464 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.440763s)
	W0910 19:01:50.796109    5464 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 19:01:50.846372    5464 ssh_runner.go:235] Completed: systemctl --version: (4.4829491s)
	I0910 19:01:50.854798    5464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 19:01:50.863671    5464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:01:50.872300    5464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:01:50.900866    5464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:01:50.900866    5464 start.go:495] detecting cgroup driver to use...
	I0910 19:01:50.900866    5464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0910 19:01:50.914041    5464 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 19:01:50.914041    5464 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 19:01:50.948237    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 19:01:50.975833    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 19:01:50.994335    5464 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 19:01:51.002936    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 19:01:51.030164    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:01:51.059108    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 19:01:51.086239    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:01:51.115021    5464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:01:51.142264    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 19:01:51.170377    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 19:01:51.198789    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 19:01:51.229509    5464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:01:51.254441    5464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:01:51.281574    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:01:51.481693    5464 ssh_runner.go:195] Run: sudo systemctl restart containerd
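The run of `sed -i -r` commands above edits `/etc/containerd/config.toml` in place to select the "cgroupfs" cgroup driver (`SystemdCgroup = false`) before restarting containerd. A sketch of that toggle against a sample file, using the same sed expression as the log (the sample TOML content is illustrative):

```shell
# Sketch of the SystemdCgroup toggle from the log, run against a sample
# config.toml rather than the real /etc/containerd/config.toml (no sudo).
TOML="$(mktemp)"
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$TOML"
# Same expression as the log: any indented SystemdCgroup line becomes false,
# preserving its leading whitespace via the capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$TOML"
grep 'SystemdCgroup' "$TOML"   # prints:   SystemdCgroup = false
```

The capture group `( *)` is what lets one expression handle any indentation level in the TOML tree without disturbing alignment.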
	I0910 19:01:51.510805    5464 start.go:495] detecting cgroup driver to use...
	I0910 19:01:51.519947    5464 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 19:01:51.549113    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:01:51.580544    5464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:01:51.619315    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:01:51.661431    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:01:51.697584    5464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 19:01:51.753819    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:01:51.779171    5464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:01:51.826038    5464 ssh_runner.go:195] Run: which cri-dockerd
	I0910 19:01:51.841572    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 19:01:51.859063    5464 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 19:01:51.899802    5464 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 19:01:52.087874    5464 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 19:01:52.273197    5464 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 19:01:52.273197    5464 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 19:01:52.319923    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:01:52.512177    5464 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 19:01:55.224889    5464 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7125317s)
	I0910 19:01:55.236232    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 19:01:55.269034    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:01:55.299776    5464 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 19:01:55.496368    5464 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 19:01:55.685718    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:01:55.882601    5464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 19:01:55.920938    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:01:55.954238    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:01:56.152609    5464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 19:01:56.274289    5464 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 19:01:56.284373    5464 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 19:01:56.296473    5464 start.go:563] Will wait 60s for crictl version
	I0910 19:01:56.305159    5464 ssh_runner.go:195] Run: which crictl
	I0910 19:01:56.322653    5464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:01:56.376892    5464 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 19:01:56.388351    5464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:01:56.439667    5464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:01:56.476485    5464 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 19:01:56.476627    5464 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 19:01:56.482367    5464 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 19:01:56.482367    5464 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 19:01:56.482367    5464 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 19:01:56.482367    5464 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 19:01:56.485380    5464 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 19:01:56.485380    5464 ip.go:214] interface addr: 172.31.208.1/20
	I0910 19:01:56.493364    5464 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 19:01:56.499918    5464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
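The `/etc/hosts` update above is idempotent: it filters out any stale `host.minikube.internal` line with `grep -v`, appends the current host IP, and copies the result back. The same pattern against a temp file (here `HOSTS` stands in for `/etc/hosts`, and the stale `172.31.99.9` entry is invented for the demo):

```shell
# Sketch of the idempotent hosts-entry update from the log, against a copy
# of /etc/hosts (minikube does the final step as `sudo cp`).
HOSTS="$(mktemp)"
printf '127.0.0.1\tlocalhost\n172.31.99.9\thost.minikube.internal\n' > "$HOSTS"
# Drop any existing entry for the name, then append the current IP.
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  printf '172.31.208.1\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
grep -c 'host.minikube.internal' "$HOSTS"   # prints: 1
```

Re-running the block any number of times leaves exactly one entry, which is why minikube can apply it unconditionally on every node start.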
	I0910 19:01:56.525271    5464 mustload.go:65] Loading cluster: ha-301400
	I0910 19:01:56.525853    5464 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:01:56.526395    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 19:01:58.455391    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:01:58.456415    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:01:58.456415    5464 host.go:66] Checking if "ha-301400" exists ...
	I0910 19:01:58.457164    5464 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400 for IP: 172.31.210.72
	I0910 19:01:58.457164    5464 certs.go:194] generating shared ca certs ...
	I0910 19:01:58.457689    5464 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:01:58.457757    5464 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 19:01:58.458494    5464 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 19:01:58.458651    5464 certs.go:256] generating profile certs ...
	I0910 19:01:58.459284    5464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key
	I0910 19:01:58.459355    5464 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.33a82c3b
	I0910 19:01:58.459587    5464 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.33a82c3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.216.168 172.31.210.72 172.31.217.146 172.31.223.254]
	I0910 19:01:58.721105    5464 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.33a82c3b ...
	I0910 19:01:58.721105    5464 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.33a82c3b: {Name:mkb26e86fe1205d999927a8cf6b690a3007cd1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:01:58.721547    5464 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.33a82c3b ...
	I0910 19:01:58.721547    5464 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.33a82c3b: {Name:mk58204b8e1dd1a661f19f5061ec3f53f960aacd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:01:58.723939    5464 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.33a82c3b -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt
	I0910 19:01:58.741051    5464 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.33a82c3b -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key
	I0910 19:01:58.741771    5464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key
	I0910 19:01:58.741771    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 19:01:58.741771    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 19:01:58.741771    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 19:01:58.742297    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 19:01:58.742391    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 19:01:58.742391    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 19:01:58.743216    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 19:01:58.743369    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 19:01:58.743821    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 19:01:58.744104    5464 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 19:01:58.744170    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 19:01:58.744170    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 19:01:58.744170    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 19:01:58.744784    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 19:01:58.745104    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 19:01:58.745296    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 19:01:58.745296    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:01:58.745296    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 19:01:58.745296    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 19:02:00.688687    5464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:02:00.688687    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:02:00.688687    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 19:02:03.045741    5464 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 19:02:03.046176    5464 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:02:03.046572    5464 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 19:02:03.143062    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0910 19:02:03.150824    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0910 19:02:03.179880    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0910 19:02:03.185751    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0910 19:02:03.212864    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0910 19:02:03.220419    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0910 19:02:03.249264    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0910 19:02:03.256380    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0910 19:02:03.287121    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0910 19:02:03.293664    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0910 19:02:03.323250    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0910 19:02:03.330365    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0910 19:02:03.352559    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:02:03.402857    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 19:02:03.447296    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:02:03.492201    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 19:02:03.536743    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0910 19:02:03.585483    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:02:03.631378    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:02:03.684302    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:02:03.729730    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 19:02:03.773937    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:02:03.815905    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 19:02:03.860247    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0910 19:02:03.903738    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0910 19:02:03.935631    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0910 19:02:03.966496    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0910 19:02:04.004163    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0910 19:02:04.037800    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0910 19:02:04.068511    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0910 19:02:04.109047    5464 ssh_runner.go:195] Run: openssl version
	I0910 19:02:04.126570    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 19:02:04.153524    5464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 19:02:04.160845    5464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:02:04.169231    5464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 19:02:04.188495    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:02:04.217943    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:02:04.248833    5464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:02:04.255564    5464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:02:04.264526    5464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:02:04.281818    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:02:04.309846    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 19:02:04.337268    5464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 19:02:04.344847    5464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:02:04.353611    5464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 19:02:04.369670    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 19:02:04.401440    5464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:02:04.419260    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:02:04.437043    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:02:04.454028    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:02:04.473587    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:02:04.491984    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:02:04.509967    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
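The six `openssl x509 … -checkend 86400` runs above are expiry pre-flights: exit status 0 means the certificate will still be valid 86400 seconds (24 hours) from now, so no regeneration is needed. A sketch with a throwaway self-signed certificate (the `CN=checkend-demo` subject and temp paths are illustrative, not from minikube):

```shell
# Sketch of the -checkend pre-flight from the log, using a throwaway
# self-signed cert instead of the cluster certs under /var/lib/minikube.
D="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout "$D/k.pem" -out "$D/c.pem" -subj "/CN=checkend-demo" 2>/dev/null
# Exit 0: the cert will not have expired 86400 seconds (24h) from now.
if openssl x509 -noout -in "$D/c.pem" -checkend 86400; then
  echo "cert valid for at least another day"
fi
```

A failing check here is what would send minikube down the cert-regeneration path instead of the "skipping valid … cert" branch seen earlier in this log.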
	I0910 19:02:04.519711    5464 kubeadm.go:934] updating node {m02 172.31.210.72 8443 v1.31.0 docker true true} ...
	I0910 19:02:04.519711    5464 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-301400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.210.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:02:04.519711    5464 kube-vip.go:115] generating kube-vip config ...
	I0910 19:02:04.528576    5464 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 19:02:04.554746    5464 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 19:02:04.555003    5464 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0910 19:02:04.565709    5464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:02:04.581754    5464 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:02:04.591144    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0910 19:02:04.612396    5464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0910 19:02:04.643932    5464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:02:04.674081    5464 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 19:02:04.717640    5464 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0910 19:02:04.723705    5464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:02:04.752374    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:02:04.950182    5464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:02:04.978010    5464 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.31.210.72 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 19:02:04.978064    5464 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:02:04.979050    5464 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:02:04.981337    5464 out.go:177] * Verifying Kubernetes components...
	I0910 19:02:04.984406    5464 out.go:177] * Enabled addons: 
	I0910 19:02:04.988681    5464 addons.go:510] duration metric: took 10.67ms for enable addons: enabled=[]
	I0910 19:02:04.994335    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:02:05.203250    5464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:02:05.233879    5464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:02:05.234300    5464 kapi.go:59] client config for ha-301400: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0910 19:02:05.234300    5464 kubeadm.go:483] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.216.168:8443
	I0910 19:02:05.236434    5464 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 19:02:05.236720    5464 node_ready.go:35] waiting up to 6m0s for node "ha-301400-m02" to be "Ready" ...
	I0910 19:02:05.237019    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:05.237019    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.237019    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.237019    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.250139    5464 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0910 19:02:05.750766    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:05.750766    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.750766    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.750766    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.755599    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:05.756360    5464 node_ready.go:49] node "ha-301400-m02" has status "Ready":"True"
	I0910 19:02:05.756360    5464 node_ready.go:38] duration metric: took 519.493ms for node "ha-301400-m02" to be "Ready" ...
	I0910 19:02:05.756360    5464 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:02:05.756501    5464 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 19:02:05.756574    5464 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 19:02:05.756658    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 19:02:05.756658    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.756658    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.756658    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.764184    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
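The `round_trippers` lines here show the readiness wait: the client re-issues `GET /api/v1/nodes/ha-301400-m02` roughly every 500ms until the node reports `Ready`, bounded by the 6m0s budget logged above. A generic sketch of that poll loop (`check` is a stand-in for the API request, and the counter-file demo is invented):

```shell
# Generic sketch of the poll-until-ready loop the log shows: retry a
# condition every 500ms up to a bounded number of attempts.
wait_ready() {
  local tries=$1; shift
  local i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@"; then echo "ready after $((i + 1)) attempt(s)"; return 0; fi
    i=$((i + 1))
    sleep 0.5
  done
  echo "timed out" >&2; return 1
}
# Demo: a check that succeeds on its 3rd call, via a counter file.
CNT="$(mktemp)"; echo 0 > "$CNT"
check() { n=$(($(cat "$CNT") + 1)); echo "$n" > "$CNT"; [ "$n" -ge 3 ]; }
wait_ready 10 check   # prints: ready after 3 attempt(s)
```

Bounding by attempt count (or wall clock, as minikube's "waiting up to 6m0s" does) is what turns a flaky dependency into a deterministic pass/fail in the test report.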
	I0910 19:02:05.776127    5464 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 19:02:05.777063    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fsbwc
	I0910 19:02:05.777063    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.777063    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.777063    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.780930    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:05.781823    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 19:02:05.781823    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.781888    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.781888    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.788020    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:05.788020    5464 pod_ready.go:93] pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace has status "Ready":"True"
	I0910 19:02:05.788020    5464 pod_ready.go:82] duration metric: took 11.8928ms for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 19:02:05.788020    5464 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 19:02:05.788020    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ntqxc
	I0910 19:02:05.788020    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.788020    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.788020    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.793000    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:05.793000    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 19:02:05.793000    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.793000    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.793000    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.798002    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:05.798002    5464 pod_ready.go:93] pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace has status "Ready":"True"
	I0910 19:02:05.798002    5464 pod_ready.go:82] duration metric: took 9.9809ms for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 19:02:05.798002    5464 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 19:02:05.798987    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400
	I0910 19:02:05.798987    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.798987    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.798987    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.802003    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:05.803328    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 19:02:05.803328    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.803385    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.803385    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.806013    5464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:02:05.807269    5464 pod_ready.go:93] pod "etcd-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 19:02:05.807335    5464 pod_ready.go:82] duration metric: took 9.3325ms for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 19:02:05.807335    5464 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 19:02:05.807399    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:05.807462    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.807462    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.807462    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.810007    5464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:02:05.811002    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:05.811002    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:05.811002    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:05.811002    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:05.814007    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:06.320907    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:06.320907    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:06.321229    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:06.321229    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:06.325894    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:06.326630    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:06.326630    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:06.326630    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:06.326630    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:06.331220    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:06.811485    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:06.811485    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:06.811550    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:06.811550    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:06.839433    5464 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0910 19:02:06.840831    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:06.840831    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:06.840831    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:06.840831    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:06.869432    5464 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0910 19:02:07.315658    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:07.315658    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:07.315658    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:07.315826    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:07.321218    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:07.322185    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:07.322185    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:07.322185    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:07.322185    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:07.326802    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:07.822004    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:07.822004    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:07.822004    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:07.822004    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:07.829573    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:07.830462    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:07.830566    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:07.830566    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:07.830566    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:07.835571    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:07.836517    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:08.313465    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:08.313465    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:08.313465    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:08.313465    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:08.324475    5464 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0910 19:02:08.326462    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:08.326462    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:08.326535    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:08.326535    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:08.333216    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:08.820004    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:08.820004    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:08.820004    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:08.820004    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:08.825993    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:08.827406    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:08.827406    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:08.827474    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:08.827584    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:08.832262    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:09.311454    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:09.311485    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:09.311536    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:09.311536    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:09.319375    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:09.320029    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:09.320029    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:09.320029    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:09.320029    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:09.323597    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:09.816789    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:09.816789    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:09.816789    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:09.816789    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:09.821926    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:09.822677    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:09.822677    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:09.822677    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:09.822677    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:09.828554    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:10.309869    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:10.309954    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:10.309954    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:10.309954    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:10.314486    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:10.316179    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:10.316252    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:10.316252    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:10.316252    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:10.320478    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:10.321784    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:10.818614    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:10.818848    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:10.818848    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:10.818848    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:10.823248    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:10.825022    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:10.825086    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:10.825086    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:10.825086    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:10.829753    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:11.314665    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:11.314738    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:11.314738    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:11.314738    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:11.322404    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:11.323654    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:11.323700    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:11.323700    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:11.323700    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:11.328563    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:11.817291    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:11.817291    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:11.817291    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:11.817291    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:11.821891    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:11.822989    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:11.822989    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:11.822989    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:11.822989    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:11.827956    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:12.317834    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:12.317834    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:12.317834    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:12.317834    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:12.322820    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:12.324673    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:12.324673    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:12.324673    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:12.324775    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:12.328926    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:12.330339    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:12.819541    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:12.819541    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:12.819626    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:12.819626    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:12.830798    5464 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0910 19:02:12.832554    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:12.832554    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:12.832554    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:12.832649    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:12.835989    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:13.320940    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:13.320940    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:13.320940    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:13.320940    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:13.325517    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:13.326549    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:13.326549    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:13.326549    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:13.326549    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:13.330796    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:13.820567    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:13.821123    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:13.821123    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:13.821123    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:13.828733    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:13.829500    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:13.829500    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:13.829500    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:13.829500    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:13.833619    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:14.309847    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:14.309847    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:14.309847    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:14.309937    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:14.314093    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:14.316018    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:14.316018    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:14.316080    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:14.316080    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:14.322945    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:14.810363    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:14.810428    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:14.810428    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:14.810428    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:14.815080    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:14.816801    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:14.816900    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:14.816900    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:14.816900    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:14.821257    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:14.821768    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:15.313088    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:15.313160    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:15.313160    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:15.313234    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:15.317597    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:15.319726    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:15.319726    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:15.319726    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:15.319726    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:15.325929    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:15.815085    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:15.815159    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:15.815159    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:15.815159    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:15.819375    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:15.820818    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:15.820919    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:15.820919    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:15.820919    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:15.825120    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:16.314633    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:16.314633    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:16.314633    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:16.314633    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:16.320683    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:16.323837    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:16.323917    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:16.323917    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:16.324003    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:16.328679    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:16.812763    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:16.812954    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:16.812954    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:16.812954    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:16.825325    5464 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0910 19:02:16.826318    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:16.826318    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:16.826318    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:16.826318    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:16.831168    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:16.831892    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:17.310487    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:17.310577    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:17.310577    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:17.310577    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:17.314857    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:17.316359    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:17.316359    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:17.316438    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:17.316438    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:17.322338    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:17.808541    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:17.808630    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:17.808630    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:17.808630    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:17.818760    5464 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 19:02:17.819763    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:17.819763    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:17.819763    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:17.819763    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:17.824891    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:18.323228    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:18.323228    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:18.323303    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:18.323303    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:18.328092    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:18.329856    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:18.329856    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:18.329856    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:18.329856    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:18.334274    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:18.810331    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:18.810331    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:18.810405    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:18.810405    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:18.815209    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:18.816730    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:18.816803    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:18.816803    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:18.816803    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:18.820702    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:19.309784    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:19.309920    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:19.309920    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:19.309920    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:19.314803    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:19.316593    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:19.316593    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:19.316682    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:19.316682    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:19.321222    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:19.321869    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:19.811721    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:19.811721    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:19.811721    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:19.811721    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:19.817237    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:19.818233    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:19.818306    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:19.818306    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:19.818306    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:19.821451    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:20.313224    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:20.313224    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:20.313317    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:20.313317    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:20.318543    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:20.319499    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:20.319499    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:20.319499    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:20.319499    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:20.322674    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:20.812749    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:20.812861    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:20.812861    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:20.812861    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:20.819006    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:20.819940    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:20.819940    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:20.819940    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:20.819940    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:20.827087    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:21.314353    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:21.314353    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:21.314353    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:21.314353    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:21.318635    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:21.320983    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:21.320983    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:21.321078    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:21.321078    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:21.326431    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:21.327440    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:21.818102    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:21.818102    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:21.818102    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:21.818102    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:21.833268    5464 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0910 19:02:21.834956    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:21.834956    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:21.834956    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:21.834956    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:21.859079    5464 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0910 19:02:22.320601    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:22.320601    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:22.320601    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:22.320601    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:22.327968    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:22.328713    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:22.328713    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:22.328713    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:22.328713    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:22.333444    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:22.818291    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:22.818291    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:22.818291    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:22.818291    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:22.824361    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:22.825680    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:22.825680    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:22.825680    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:22.825680    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:22.828977    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:23.316518    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:23.316518    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:23.316518    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:23.316518    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:23.323113    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:23.324826    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:23.324826    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:23.324826    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:23.324826    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:23.335007    5464 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 19:02:23.335007    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.815511    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:23.815511    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:23.815511    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:23.815511    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:23.823325    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:23.824400    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:23.824464    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:23.824520    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:23.824580    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:23.829740    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:24.320407    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:24.320407    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:24.320407    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:24.320407    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:24.325778    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:24.326952    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:24.327057    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:24.327057    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:24.327057    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:24.332622    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:24.822531    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:24.822531    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:24.822531    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:24.822531    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:24.830827    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:24.832582    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:24.832582    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:24.832582    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:24.832582    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:24.837010    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:25.324546    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:25.324640    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:25.324640    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:25.324640    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:25.329774    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:25.330766    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:25.330857    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:25.330857    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:25.330857    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:25.335021    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:25.335750    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.823952    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:25.823952    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:25.824022    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:25.824022    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:25.828662    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:25.830385    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:25.830385    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:25.830385    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:25.830385    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:25.833557    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:26.310243    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:26.310243    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:26.310243    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:26.310243    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:26.316241    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:26.317650    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:26.317650    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:26.317650    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:26.317650    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:26.322367    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:26.810965    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:26.811053    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:26.811053    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:26.811053    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:26.816970    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:26.818660    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:26.818660    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:26.818719    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:26.818719    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:26.823443    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:27.311784    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:27.311784    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:27.311784    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:27.311784    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:27.316969    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:27.317344    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:27.317344    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:27.317344    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:27.317344    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:27.321595    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:27.815053    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:27.815153    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:27.815153    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:27.815153    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:27.822824    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:27.822824    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:27.823862    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:27.823862    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:27.823862    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:27.840821    5464 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0910 19:02:27.841510    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:28.315315    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:28.315315    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:28.315315    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:28.315402    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:28.322963    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:28.324607    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:28.324782    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:28.324782    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:28.324782    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:28.329073    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:28.817483    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:28.817483    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:28.817483    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:28.817483    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:28.822880    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:28.823902    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:28.823902    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:28.823902    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:28.823902    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:28.829820    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:29.317801    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:29.317892    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:29.317968    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:29.317968    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:29.322334    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:29.324978    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:29.324978    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:29.325038    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:29.325038    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:29.330014    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:29.817673    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:29.817885    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:29.817885    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:29.817885    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:29.832250    5464 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0910 19:02:29.833610    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:29.833610    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:29.833610    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:29.833610    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:29.847021    5464 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0910 19:02:29.847955    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:30.318509    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:30.318509    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:30.318509    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:30.318509    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:30.323106    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:30.325137    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:30.325137    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:30.325137    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:30.325137    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:30.333706    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:30.820843    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:30.820843    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:30.820843    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:30.820843    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:30.826425    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:30.828194    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:30.828278    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:30.828278    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:30.828278    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:30.836430    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:31.321550    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:31.321609    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:31.321609    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:31.321609    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:31.326406    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:31.327151    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:31.327235    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:31.327235    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:31.327235    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:31.334088    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:31.821103    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:31.821390    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:31.821390    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:31.821390    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:31.829293    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:31.830625    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:31.830717    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:31.830717    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:31.830717    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:31.834903    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:32.323277    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:32.323277    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:32.323277    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:32.323277    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:32.328376    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:32.329823    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:32.329894    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:32.329894    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:32.329894    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:32.333156    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:32.334934    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.822217    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:32.822293    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:32.822293    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:32.822293    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:32.828035    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:32.829575    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:32.829575    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:32.829575    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:32.829686    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:32.833860    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:33.322986    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:33.322986    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:33.322986    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:33.322986    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:33.328201    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:33.328984    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:33.329049    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:33.329049    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:33.329049    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:33.335806    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:33.822631    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:33.822738    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:33.822738    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:33.822738    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:33.830089    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:33.831000    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:33.831000    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:33.831152    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:33.831152    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:33.835236    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:34.321734    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:34.321986    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:34.321986    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:34.321986    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:34.326383    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:34.327866    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:34.327866    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:34.327866    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:34.327866    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:34.334124    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:34.821621    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:34.821858    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:34.821858    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:34.821858    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:34.826648    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:34.828727    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:34.828727    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:34.828783    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:34.828783    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:34.833694    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:34.833931    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:35.323887    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:35.324139    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:35.324139    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:35.324139    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:35.333064    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:35.333964    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:35.333964    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:35.333964    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:35.333964    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:35.337552    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:35.812155    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:35.812155    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:35.812155    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:35.812155    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:35.819278    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:35.821223    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:35.821223    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:35.821223    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:35.821223    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:35.826537    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:36.313147    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:36.313147    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:36.313147    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:36.313147    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:36.319728    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:36.321415    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:36.321526    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:36.321526    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:36.321526    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:36.329736    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:36.813489    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:36.813588    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:36.813588    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:36.813588    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:36.821983    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:36.823639    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:36.823695    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:36.823695    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:36.823695    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:36.828777    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:37.314263    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:37.314494    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:37.314494    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:37.314494    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:37.319310    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:37.320995    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:37.321086    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:37.321086    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:37.321086    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:37.326291    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:37.327060    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:37.817583    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:37.817583    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:37.817583    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:37.817583    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:37.823024    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:37.824454    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:37.824527    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:37.824527    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:37.824527    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:37.830997    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:38.316796    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:38.317028    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:38.317028    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:38.317120    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:38.322723    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:38.323689    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:38.323689    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:38.323689    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:38.323689    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:38.327266    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:38.820626    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:38.820716    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:38.820716    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:38.820716    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:38.825935    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:38.827917    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:38.827992    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:38.827992    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:38.827992    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:38.832659    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:39.320669    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:39.320669    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:39.320820    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:39.320820    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:39.325562    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:39.327373    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:39.327430    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:39.327430    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:39.327430    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:39.331038    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:39.332705    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.821168    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:39.821168    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:39.821257    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:39.821257    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:39.826662    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:39.827701    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:39.827701    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:39.827701    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:39.827701    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:39.835224    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:40.317616    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:40.317669    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:40.317669    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:40.317702    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:40.326134    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:40.327392    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:40.327448    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:40.327448    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:40.327448    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:40.330868    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:40.816632    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:40.816632    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:40.816632    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:40.816632    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:40.822175    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:40.822829    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:40.822829    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:40.822829    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:40.822829    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:40.831299    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:41.317218    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:41.317218    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:41.317218    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:41.317218    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:41.321493    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:41.322157    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:41.322764    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:41.322764    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:41.322764    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:41.326806    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:41.817845    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:41.818122    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:41.818122    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:41.818122    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:41.822533    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:41.824555    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:41.824614    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:41.824614    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:41.824614    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:41.829106    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:41.829876    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.318494    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:42.318874    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:42.318874    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:42.318963    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:42.323606    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:42.324973    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:42.324973    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:42.324973    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:42.324973    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:42.329228    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:42.820475    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:42.820573    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:42.820573    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:42.820573    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:42.826034    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:42.827217    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:42.827217    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:42.827217    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:42.827217    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:42.834209    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:43.319946    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:43.320190    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:43.320190    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:43.320190    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:43.330159    5464 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 19:02:43.331351    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:43.331351    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:43.331351    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:43.331351    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:43.334267    5464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:02:43.820063    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:43.820263    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:43.820263    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:43.820263    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:43.826495    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:43.827302    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:43.827302    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:43.827302    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:43.827302    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:43.831458    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:43.832506    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:44.317988    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:44.318064    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:44.318064    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:44.318132    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:44.325823    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:44.327539    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:44.327597    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:44.327597    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:44.327597    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:44.331580    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:44.821539    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:44.821539    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:44.821647    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:44.821647    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:44.829834    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:44.830754    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:44.830754    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:44.830754    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:44.830754    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:44.840505    5464 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 19:02:45.319343    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:45.319617    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:45.319617    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:45.319710    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:45.324192    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:45.325546    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:45.325546    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:45.325546    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:45.325546    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:45.329151    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:45.825538    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:45.825646    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:45.825732    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:45.825732    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:45.832599    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:45.833602    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:45.833602    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:45.833602    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:45.833602    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:45.838599    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:45.838599    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:46.320983    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:46.320983    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:46.321050    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:46.321050    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:46.327808    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:46.328697    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:46.328697    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:46.328697    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:46.328697    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:46.332927    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:46.816699    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:46.816733    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:46.816783    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:46.816783    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:46.822426    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:46.824224    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:46.824224    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:46.824224    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:46.824224    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:46.828415    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:47.322681    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:47.322681    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:47.322681    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:47.322681    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:47.328722    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:47.330515    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:47.330515    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:47.330622    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:47.330622    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:47.334864    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:47.820628    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:47.820628    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:47.820887    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:47.820887    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:47.825207    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:47.827616    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:47.827616    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:47.827616    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:47.827616    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:47.831990    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:48.320651    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:48.320946    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:48.320946    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:48.320946    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:48.326290    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:48.327599    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:48.327699    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:48.327699    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:48.327699    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:48.330868    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:48.332023    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.822296    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:48.822395    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:48.822395    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:48.822395    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:48.827355    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:48.828553    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:48.828553    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:48.828553    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:48.828640    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:48.834293    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:49.322999    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:49.322999    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:49.322999    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:49.322999    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:49.328299    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:49.329573    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:49.329630    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:49.329630    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:49.329630    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:49.334275    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:49.826617    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:49.826731    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:49.826731    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:49.826731    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:49.833906    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:49.834869    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:49.834869    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:49.834869    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:49.834869    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:49.838888    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:50.318879    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:50.318879    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:50.318879    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:50.318879    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:50.325886    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:50.327573    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:50.327573    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:50.327638    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:50.327638    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:50.331531    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:50.818973    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:50.818973    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:50.818973    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:50.818973    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:50.825750    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:50.826771    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:50.826771    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:50.826771    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:50.826771    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:50.830372    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:50.832003    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:51.321501    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:51.321567    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:51.321567    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:51.321567    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:51.326120    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:51.326120    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:51.326120    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:51.326120    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:51.326120    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:51.331766    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:51.823705    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:51.823974    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:51.823974    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:51.823974    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:51.829348    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:51.830969    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:51.830969    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:51.830969    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:51.831042    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:51.839180    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:02:52.311995    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:52.312055    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:52.312107    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:52.312107    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:52.317404    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:52.318404    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:52.318404    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:52.318404    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:52.318404    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:52.322481    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:52.813069    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:52.813158    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:52.813158    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:52.813158    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:52.820264    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:52.821001    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:52.821001    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:52.821001    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:52.821001    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:52.825590    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:53.313441    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:53.313687    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:53.313687    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:53.313687    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:53.321232    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:53.321232    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:53.321232    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:53.321232    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:53.321232    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:53.326126    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:53.327371    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.817484    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:53.817484    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:53.817484    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:53.817484    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:53.821791    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:53.823457    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:53.823457    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:53.823457    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:53.823457    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:53.827641    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:54.319828    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:54.320279    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:54.320279    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:54.320279    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:54.325331    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:54.327071    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:54.327137    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:54.327137    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:54.327137    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:54.331871    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:54.825108    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:54.825353    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:54.825353    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:54.825353    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:54.836115    5464 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 19:02:54.836115    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:54.836115    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:54.836115    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:54.836115    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:54.841805    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:55.314309    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:55.314309    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:55.314309    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:55.314309    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:55.318645    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:55.319841    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:55.319841    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:55.319841    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:55.319841    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:55.323256    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:55.813275    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:55.813275    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:55.813275    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:55.813344    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:55.819891    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:02:55.820845    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:55.820845    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:55.820845    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:55.820845    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:55.828511    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:02:55.829579    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:56.326070    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:56.326156    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:56.326156    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:56.326156    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:56.339231    5464 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0910 19:02:56.340283    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:56.340350    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:56.340350    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:56.340350    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:56.344287    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:56.825489    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:56.825607    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:56.825607    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:56.825607    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:56.830921    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:56.832089    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:56.832089    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:56.832089    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:56.832089    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:56.835367    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:02:57.321506    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:57.321576    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:57.321646    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:57.321646    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:57.326987    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:57.328074    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:57.328174    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:57.328174    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:57.328174    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:57.332342    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:57.824581    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 19:02:57.824581    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:57.824647    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:57.824647    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:57.829213    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:02:57.830541    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 19:02:57.830603    5464 round_trippers.go:469] Request Headers:
	I0910 19:02:57.830603    5464 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:02:57.830603    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:02:57.836352    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:02:57.837347    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
** /stderr **
ha_test.go:422: I0910 19:00:05.355857    5464 out.go:345] Setting OutFile to fd 760 ...
I0910 19:00:05.426038    5464 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 19:00:05.426038    5464 out.go:358] Setting ErrFile to fd 848...
I0910 19:00:05.426038    5464 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 19:00:05.439397    5464 mustload.go:65] Loading cluster: ha-301400
I0910 19:00:05.439985    5464 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 19:00:05.441437    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:07.407907    5464 main.go:141] libmachine: [stdout =====>] : Off
I0910 19:00:07.407907    5464 main.go:141] libmachine: [stderr =====>] : 
W0910 19:00:07.408012    5464 host.go:58] "ha-301400-m02" host status: Stopped
I0910 19:00:07.411310    5464 out.go:177] * Starting "ha-301400-m02" control-plane node in "ha-301400" cluster
I0910 19:00:07.413648    5464 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0910 19:00:07.413648    5464 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
I0910 19:00:07.413648    5464 cache.go:56] Caching tarball of preloaded images
I0910 19:00:07.414293    5464 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0910 19:00:07.414293    5464 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0910 19:00:07.414293    5464 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
I0910 19:00:07.416128    5464 start.go:360] acquireMachinesLock for ha-301400-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0910 19:00:07.416128    5464 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-301400-m02"
I0910 19:00:07.416803    5464 start.go:96] Skipping create...Using existing machine configuration
I0910 19:00:07.416862    5464 fix.go:54] fixHost starting: m02
I0910 19:00:07.416931    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:09.336310    5464 main.go:141] libmachine: [stdout =====>] : Off
I0910 19:00:09.336310    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:09.336310    5464 fix.go:112] recreateIfNeeded on ha-301400-m02: state=Stopped err=<nil>
W0910 19:00:09.336389    5464 fix.go:138] unexpected machine state, will restart: <nil>
I0910 19:00:09.339124    5464 out.go:177] * Restarting existing hyperv VM for "ha-301400-m02" ...
I0910 19:00:09.341294    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-301400-m02
I0910 19:00:12.197608    5464 main.go:141] libmachine: [stdout =====>] : 
I0910 19:00:12.197608    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:12.198050    5464 main.go:141] libmachine: Waiting for host to start...
I0910 19:00:12.198050    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:14.277722    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:00:14.277722    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:14.277722    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:16.564616    5464 main.go:141] libmachine: [stdout =====>] : 
I0910 19:00:16.564616    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:17.579729    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:19.565446    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:00:19.565446    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:19.565674    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:21.820737    5464 main.go:141] libmachine: [stdout =====>] : 
I0910 19:00:21.820785    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:22.828186    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:24.824253    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:00:24.824253    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:24.825037    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:27.143629    5464 main.go:141] libmachine: [stdout =====>] : 
I0910 19:00:27.144647    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:28.150821    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:30.198105    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:00:30.198285    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:30.198285    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:32.455037    5464 main.go:141] libmachine: [stdout =====>] : 
I0910 19:00:32.455037    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:33.467497    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:35.491825    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:00:35.492133    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:35.492133    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:37.820766    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
I0910 19:00:37.821173    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:37.823958    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:39.784025    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:00:39.784025    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:39.784025    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:42.098623    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
I0910 19:00:42.098853    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:42.099226    5464 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
I0910 19:00:42.101924    5464 machine.go:93] provisionDockerMachine start ...
I0910 19:00:42.101924    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:44.078010    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:00:44.078010    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:44.078117    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:46.377662    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
I0910 19:00:46.377662    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:46.382497    5464 main.go:141] libmachine: Using SSH client type: native
I0910 19:00:46.382497    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
I0910 19:00:46.383024    5464 main.go:141] libmachine: About to run SSH command:
hostname
I0910 19:00:46.507579    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0910 19:00:46.507638    5464 buildroot.go:166] provisioning hostname "ha-301400-m02"
I0910 19:00:46.507700    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:48.452847    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:00:48.452847    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:48.452938    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:50.777497    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
I0910 19:00:50.777497    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:50.781822    5464 main.go:141] libmachine: Using SSH client type: native
I0910 19:00:50.782260    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
I0910 19:00:50.782324    5464 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-301400-m02 && echo "ha-301400-m02" | sudo tee /etc/hostname
I0910 19:00:50.951092    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-301400-m02

I0910 19:00:50.952082    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:52.902296    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:00:52.902296    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:52.902296    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:55.196606    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:00:55.197515    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:55.201669    5464 main.go:141] libmachine: Using SSH client type: native
I0910 19:00:55.202559    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
I0910 19:00:55.202559    5464 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-301400-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-301400-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-301400-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0910 19:00:55.344792    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0910 19:00:55.344792    5464 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
I0910 19:00:55.344792    5464 buildroot.go:174] setting up certificates
I0910 19:00:55.344792    5464 provision.go:84] configureAuth start
I0910 19:00:55.344792    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:00:57.288132    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:00:57.288430    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:57.288430    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:00:59.616554    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:00:59.616554    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:00:59.616554    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:01.591443    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:01.591443    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:01.591525    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:03.890969    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:03.891046    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:03.891046    5464 provision.go:143] copyHostCerts
I0910 19:01:03.891388    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
I0910 19:01:03.891568    5464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
I0910 19:01:03.891640    5464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
I0910 19:01:03.892008    5464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
I0910 19:01:03.893006    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
I0910 19:01:03.893303    5464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
I0910 19:01:03.893549    5464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
I0910 19:01:03.893636    5464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
I0910 19:01:03.895034    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
I0910 19:01:03.895088    5464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
I0910 19:01:03.895088    5464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
I0910 19:01:03.895088    5464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
I0910 19:01:03.895778    5464 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-301400-m02 san=[127.0.0.1 172.31.210.72 ha-301400-m02 localhost minikube]
I0910 19:01:04.203305    5464 provision.go:177] copyRemoteCerts
I0910 19:01:04.211313    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0910 19:01:04.211313    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:06.151234    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:06.152248    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:06.152365    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:08.494914    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:08.494996    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:08.494996    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
I0910 19:01:08.600762    5464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3890968s)
I0910 19:01:08.600828    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0910 19:01:08.601406    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0910 19:01:08.650188    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0910 19:01:08.650567    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0910 19:01:08.696186    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0910 19:01:08.696186    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0910 19:01:08.741360    5464 provision.go:87] duration metric: took 13.395671s to configureAuth
I0910 19:01:08.741360    5464 buildroot.go:189] setting minikube options for container-runtime
I0910 19:01:08.742203    5464 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 19:01:08.742269    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:10.700318    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:10.700318    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:10.700318    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:13.023927    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:13.024525    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:13.029280    5464 main.go:141] libmachine: Using SSH client type: native
I0910 19:01:13.029933    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
I0910 19:01:13.029933    5464 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0910 19:01:13.157134    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0910 19:01:13.157235    5464 buildroot.go:70] root file system type: tmpfs
I0910 19:01:13.157467    5464 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0910 19:01:13.157554    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:15.096499    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:15.096499    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:15.096499    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:17.419841    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:17.420855    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:17.427549    5464 main.go:141] libmachine: Using SSH client type: native
I0910 19:01:17.427549    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
I0910 19:01:17.428210    5464 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0910 19:01:17.584753    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0910 19:01:17.584880    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:19.567190    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:19.567416    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:19.567416    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:21.905923    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:21.905923    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:21.910628    5464 main.go:141] libmachine: Using SSH client type: native
I0910 19:01:21.910700    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
I0910 19:01:21.910700    5464 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0910 19:01:24.383155    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0910 19:01:24.383155    5464 machine.go:96] duration metric: took 42.2784002s to provisionDockerMachine
I0910 19:01:24.383155    5464 start.go:293] postStartSetup for "ha-301400-m02" (driver="hyperv")
I0910 19:01:24.383155    5464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0910 19:01:24.393142    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0910 19:01:24.393142    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:26.338886    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:26.338886    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:26.338962    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:28.654392    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:28.654392    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:28.654392    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
I0910 19:01:28.766806    5464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3733719s)
I0910 19:01:28.777083    5464 ssh_runner.go:195] Run: cat /etc/os-release
I0910 19:01:28.784083    5464 info.go:137] Remote host: Buildroot 2023.02.9
I0910 19:01:28.784192    5464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
I0910 19:01:28.784698    5464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
I0910 19:01:28.785890    5464 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
I0910 19:01:28.785890    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
I0910 19:01:28.796652    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0910 19:01:28.814710    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
I0910 19:01:28.866740    5464 start.go:296] duration metric: took 4.4832853s for postStartSetup
I0910 19:01:28.877073    5464 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0910 19:01:28.878062    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:30.846023    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:30.846023    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:30.846405    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:33.193913    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:33.194359    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:33.194743    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
I0910 19:01:33.296264    5464 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.4188963s)
I0910 19:01:33.296264    5464 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
I0910 19:01:33.307398    5464 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0910 19:01:33.378148    5464 fix.go:56] duration metric: took 1m25.9555265s for fixHost
I0910 19:01:33.378148    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:35.278657    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:35.278831    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:35.278831    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:37.549504    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:37.549988    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:37.553452    5464 main.go:141] libmachine: Using SSH client type: native
I0910 19:01:37.554029    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
I0910 19:01:37.554029    5464 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0910 19:01:37.678498    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994897.899669378

I0910 19:01:37.678498    5464 fix.go:216] guest clock: 1725994897.899669378
I0910 19:01:37.678567    5464 fix.go:229] Guest: 2024-09-10 19:01:37.899669378 +0000 UTC Remote: 2024-09-10 19:01:33.3781488 +0000 UTC m=+88.093166601 (delta=4.521520578s)
I0910 19:01:37.678637    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:39.641515    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:39.641515    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:39.641515    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:41.974680    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:41.974680    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:41.978823    5464 main.go:141] libmachine: Using SSH client type: native
I0910 19:01:41.978877    5464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.72 22 <nil> <nil>}
I0910 19:01:41.978877    5464 main.go:141] libmachine: About to run SSH command:
sudo date -s @1725994897
I0910 19:01:42.125317    5464 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 19:01:37 UTC 2024

I0910 19:01:42.125317    5464 fix.go:236] clock set: Tue Sep 10 19:01:37 UTC 2024
(err=<nil>)
I0910 19:01:42.125431    5464 start.go:83] releasing machines lock for "ha-301400-m02", held for 1m34.7024341s
I0910 19:01:42.125546    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:44.036153    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:44.036153    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:44.036153    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:46.352177    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72

I0910 19:01:46.352177    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:46.355049    5464 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
I0910 19:01:46.355049    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:46.363124    5464 ssh_runner.go:195] Run: systemctl --version
I0910 19:01:46.363124    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
I0910 19:01:48.328226    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:48.328226    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:48.328226    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:48.359329    5464 main.go:141] libmachine: [stdout =====>] : Running

I0910 19:01:48.359329    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:48.359329    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
I0910 19:01:50.711589    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
I0910 19:01:50.711589    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:50.711589    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
I0910 19:01:50.744736    5464 main.go:141] libmachine: [stdout =====>] : 172.31.210.72
I0910 19:01:50.744736    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:50.745736    5464 sshutil.go:53] new ssh client: &{IP:172.31.210.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
I0910 19:01:50.796109    5464 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.440763s)
W0910 19:01:50.796109    5464 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
stdout:

stderr:
bash: line 1: curl.exe: command not found
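The exit status 127 ("command not found") above comes from invoking the Windows-style binary name `curl.exe` through the SSH runner inside the Linux guest, where only plain `curl` could exist. A minimal sketch of a fallback (the `probe_bin` helper name is hypothetical, not minikube's code):

```shell
# Hypothetical sketch: prefer the Windows-style name when present,
# otherwise fall back to the plain name found inside a Linux guest.
probe_bin() {
  if command -v curl.exe >/dev/null 2>&1; then
    echo curl.exe
  else
    echo curl
  fi
}
probe_bin
```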
I0910 19:01:50.846372    5464 ssh_runner.go:235] Completed: systemctl --version: (4.4829491s)
I0910 19:01:50.854798    5464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0910 19:01:50.863671    5464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0910 19:01:50.872300    5464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0910 19:01:50.900866    5464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
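The `find` invocation above renames any bridge/podman CNI configs so the container runtime ignores them. A self-contained sketch of the same pattern, run against a temp dir instead of /etc/cni/net.d (the file names here are illustrative):

```shell
# Sketch: disable bridge/podman CNI configs by appending .mk_disabled,
# using a temp dir so no sudo is needed.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-flannel.conf"
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```

Only the matching file gets the `.mk_disabled` suffix; unrelated configs are left alone.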
I0910 19:01:50.900866    5464 start.go:495] detecting cgroup driver to use...
I0910 19:01:50.900866    5464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
W0910 19:01:50.914041    5464 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
W0910 19:01:50.914041    5464 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0910 19:01:50.948237    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0910 19:01:50.975833    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0910 19:01:50.994335    5464 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0910 19:01:51.002936    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0910 19:01:51.030164    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0910 19:01:51.059108    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0910 19:01:51.086239    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0910 19:01:51.115021    5464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0910 19:01:51.142264    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0910 19:01:51.170377    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0910 19:01:51.198789    5464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
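The sed rewrites above switch containerd onto the cgroupfs driver; for example, the SystemdCgroup edit preserves indentation via a backreference. A sketch against a sample config.toml line (the sample content is illustrative):

```shell
# Sketch: the SystemdCgroup rewrite applied to a sample config.toml line;
# \1 keeps the original leading whitespace intact.
cfg='    SystemdCgroup = true'
new=$(echo "$cfg" | sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g')
echo "$new"
```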
I0910 19:01:51.229509    5464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0910 19:01:51.254441    5464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0910 19:01:51.281574    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 19:01:51.481693    5464 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0910 19:01:51.510805    5464 start.go:495] detecting cgroup driver to use...
I0910 19:01:51.519947    5464 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0910 19:01:51.549113    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0910 19:01:51.580544    5464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0910 19:01:51.619315    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0910 19:01:51.661431    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0910 19:01:51.697584    5464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0910 19:01:51.753819    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0910 19:01:51.779171    5464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0910 19:01:51.826038    5464 ssh_runner.go:195] Run: which cri-dockerd
I0910 19:01:51.841572    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0910 19:01:51.859063    5464 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0910 19:01:51.899802    5464 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0910 19:01:52.087874    5464 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0910 19:01:52.273197    5464 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0910 19:01:52.273197    5464 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0910 19:01:52.319923    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 19:01:52.512177    5464 ssh_runner.go:195] Run: sudo systemctl restart docker
I0910 19:01:55.224889    5464 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7125317s)
I0910 19:01:55.236232    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0910 19:01:55.269034    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0910 19:01:55.299776    5464 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0910 19:01:55.496368    5464 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0910 19:01:55.685718    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 19:01:55.882601    5464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0910 19:01:55.920938    5464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0910 19:01:55.954238    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 19:01:56.152609    5464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0910 19:01:56.274289    5464 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0910 19:01:56.284373    5464 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0910 19:01:56.296473    5464 start.go:563] Will wait 60s for crictl version
I0910 19:01:56.305159    5464 ssh_runner.go:195] Run: which crictl
I0910 19:01:56.322653    5464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0910 19:01:56.376892    5464 start.go:579] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  27.2.0
RuntimeApiVersion:  v1
I0910 19:01:56.388351    5464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0910 19:01:56.439667    5464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0910 19:01:56.476485    5464 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
I0910 19:01:56.476627    5464 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
I0910 19:01:56.482367    5464 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
I0910 19:01:56.482367    5464 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
I0910 19:01:56.482367    5464 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
I0910 19:01:56.482367    5464 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
I0910 19:01:56.485380    5464 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
I0910 19:01:56.485380    5464 ip.go:214] interface addr: 172.31.208.1/20
I0910 19:01:56.493364    5464 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
I0910 19:01:56.499918    5464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
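The /etc/hosts edit above is idempotent: it filters out any existing host.minikube.internal entry before appending the current one. A sketch against a temp file (addresses are illustrative, and the tab-anchored grep from the log is simplified to a plain pattern here):

```shell
# Sketch of the idempotent hosts-file update, against a temp file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.31.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host\.minikube\.internal$' "$hosts"; \
  printf '172.31.208.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running it again would still leave exactly one host.minikube.internal line.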
I0910 19:01:56.525271    5464 mustload.go:65] Loading cluster: ha-301400
I0910 19:01:56.525853    5464 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 19:01:56.526395    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
I0910 19:01:58.455391    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:01:58.456415    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:01:58.456415    5464 host.go:66] Checking if "ha-301400" exists ...
I0910 19:01:58.457164    5464 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400 for IP: 172.31.210.72
I0910 19:01:58.457164    5464 certs.go:194] generating shared ca certs ...
I0910 19:01:58.457689    5464 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 19:01:58.457757    5464 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
I0910 19:01:58.458494    5464 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
I0910 19:01:58.458651    5464 certs.go:256] generating profile certs ...
I0910 19:01:58.459284    5464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key
I0910 19:01:58.459355    5464 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.33a82c3b
I0910 19:01:58.459587    5464 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.33a82c3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.216.168 172.31.210.72 172.31.217.146 172.31.223.254]
I0910 19:01:58.721105    5464 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.33a82c3b ...
I0910 19:01:58.721105    5464 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.33a82c3b: {Name:mkb26e86fe1205d999927a8cf6b690a3007cd1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 19:01:58.721547    5464 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.33a82c3b ...
I0910 19:01:58.721547    5464 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.33a82c3b: {Name:mk58204b8e1dd1a661f19f5061ec3f53f960aacd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 19:01:58.723939    5464 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.33a82c3b -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt
I0910 19:01:58.741051    5464 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.33a82c3b -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key
I0910 19:01:58.741771    5464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key
I0910 19:01:58.741771    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
I0910 19:01:58.741771    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
I0910 19:01:58.741771    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0910 19:01:58.742297    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0910 19:01:58.742391    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0910 19:01:58.742391    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0910 19:01:58.743216    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0910 19:01:58.743369    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0910 19:01:58.743821    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
W0910 19:01:58.744104    5464 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
I0910 19:01:58.744170    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
I0910 19:01:58.744170    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
I0910 19:01:58.744170    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I0910 19:01:58.744784    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
I0910 19:01:58.745104    5464 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
I0910 19:01:58.745296    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
I0910 19:01:58.745296    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0910 19:01:58.745296    5464 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
I0910 19:01:58.745296    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
I0910 19:02:00.688687    5464 main.go:141] libmachine: [stdout =====>] : Running
I0910 19:02:00.688687    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:02:00.688687    5464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
I0910 19:02:03.045741    5464 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
I0910 19:02:03.046176    5464 main.go:141] libmachine: [stderr =====>] : 
I0910 19:02:03.046572    5464 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
I0910 19:02:03.143062    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I0910 19:02:03.150824    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0910 19:02:03.179880    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I0910 19:02:03.185751    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0910 19:02:03.212864    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I0910 19:02:03.220419    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0910 19:02:03.249264    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I0910 19:02:03.256380    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I0910 19:02:03.287121    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I0910 19:02:03.293664    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0910 19:02:03.323250    5464 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I0910 19:02:03.330365    5464 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
I0910 19:02:03.352559    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0910 19:02:03.402857    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0910 19:02:03.447296    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0910 19:02:03.492201    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0910 19:02:03.536743    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
I0910 19:02:03.585483    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0910 19:02:03.631378    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0910 19:02:03.684302    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0910 19:02:03.729730    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
I0910 19:02:03.773937    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0910 19:02:03.815905    5464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
I0910 19:02:03.860247    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0910 19:02:03.903738    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0910 19:02:03.935631    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0910 19:02:03.966496    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I0910 19:02:04.004163    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0910 19:02:04.037800    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
I0910 19:02:04.068511    5464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0910 19:02:04.109047    5464 ssh_runner.go:195] Run: openssl version
I0910 19:02:04.126570    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
I0910 19:02:04.153524    5464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
I0910 19:02:04.160845    5464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
I0910 19:02:04.169231    5464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
I0910 19:02:04.188495    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
I0910 19:02:04.217943    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0910 19:02:04.248833    5464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0910 19:02:04.255564    5464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
I0910 19:02:04.264526    5464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0910 19:02:04.281818    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0910 19:02:04.309846    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
I0910 19:02:04.337268    5464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
I0910 19:02:04.344847    5464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
I0910 19:02:04.353611    5464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
I0910 19:02:04.369670    5464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
I0910 19:02:04.401440    5464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0910 19:02:04.419260    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0910 19:02:04.437043    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0910 19:02:04.454028    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0910 19:02:04.473587    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0910 19:02:04.491984    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0910 19:02:04.509967    5464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
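The series of `openssl x509 ... -checkend 86400` runs above verifies each cert stays valid for at least 24 hours; the command exits 0 only when the certificate will not expire within the given number of seconds. A sketch with a throwaway self-signed cert (the subject name is illustrative):

```shell
# Sketch: a freshly generated 30-day self-signed cert passes -checkend 86400.
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -days 30 -subj "/CN=checkend-sketch" 2>/dev/null
if openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null; then
  echo valid
fi
```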
I0910 19:02:04.519711    5464 kubeadm.go:934] updating node {m02 172.31.210.72 8443 v1.31.0 docker true true} ...
I0910 19:02:04.519711    5464 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-301400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.210.72

[Install]
config:
{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0910 19:02:04.519711    5464 kube-vip.go:115] generating kube-vip config ...
I0910 19:02:04.528576    5464 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0910 19:02:04.554746    5464 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0910 19:02:04.555003    5464 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 172.31.223.254
    - name: prometheus_server
      value: :2112
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
I0910 19:02:04.565709    5464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
I0910 19:02:04.581754    5464 binaries.go:44] Found k8s binaries, skipping transfer
I0910 19:02:04.591144    5464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I0910 19:02:04.612396    5464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I0910 19:02:04.643932    5464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0910 19:02:04.674081    5464 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
I0910 19:02:04.717640    5464 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
I0910 19:02:04.723705    5464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0910 19:02:04.752374    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 19:02:04.950182    5464 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0910 19:02:04.978010    5464 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.31.210.72 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0910 19:02:04.978064    5464 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0910 19:02:04.979050    5464 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 19:02:04.981337    5464 out.go:177] * Verifying Kubernetes components...
I0910 19:02:04.984406    5464 out.go:177] * Enabled addons: 
I0910 19:02:04.988681    5464 addons.go:510] duration metric: took 10.67ms for enable addons: enabled=[]
I0910 19:02:04.994335    5464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 19:02:05.203250    5464 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0910 19:02:05.233879    5464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
I0910 19:02:05.234300    5464 kapi.go:59] client config for ha-301400: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0910 19:02:05.234300    5464 kubeadm.go:483] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.216.168:8443
I0910 19:02:05.236434    5464 cert_rotation.go:140] Starting client certificate rotation controller
I0910 19:02:05.236720    5464 node_ready.go:35] waiting up to 6m0s for node "ha-301400-m02" to be "Ready" ...
I0910 19:02:05.237019    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:05.237019    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.237019    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.237019    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.250139    5464 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
I0910 19:02:05.750766    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:05.750766    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.750766    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.750766    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.755599    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:05.756360    5464 node_ready.go:49] node "ha-301400-m02" has status "Ready":"True"
I0910 19:02:05.756360    5464 node_ready.go:38] duration metric: took 519.493ms for node "ha-301400-m02" to be "Ready" ...
I0910 19:02:05.756360    5464 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0910 19:02:05.756501    5464 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0910 19:02:05.756574    5464 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0910 19:02:05.756658    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
I0910 19:02:05.756658    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.756658    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.756658    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.764184    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:05.776127    5464 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
I0910 19:02:05.777063    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fsbwc
I0910 19:02:05.777063    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.777063    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.777063    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.780930    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:05.781823    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
I0910 19:02:05.781823    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.781888    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.781888    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.788020    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:05.788020    5464 pod_ready.go:93] pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace has status "Ready":"True"
I0910 19:02:05.788020    5464 pod_ready.go:82] duration metric: took 11.8928ms for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
I0910 19:02:05.788020    5464 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
I0910 19:02:05.788020    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ntqxc
I0910 19:02:05.788020    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.788020    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.788020    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.793000    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:05.793000    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
I0910 19:02:05.793000    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.793000    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.793000    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.798002    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:05.798002    5464 pod_ready.go:93] pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace has status "Ready":"True"
I0910 19:02:05.798002    5464 pod_ready.go:82] duration metric: took 9.9809ms for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
I0910 19:02:05.798002    5464 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
I0910 19:02:05.798987    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400
I0910 19:02:05.798987    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.798987    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.798987    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.802003    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:05.803328    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
I0910 19:02:05.803328    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.803385    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.803385    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.806013    5464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0910 19:02:05.807269    5464 pod_ready.go:93] pod "etcd-ha-301400" in "kube-system" namespace has status "Ready":"True"
I0910 19:02:05.807335    5464 pod_ready.go:82] duration metric: took 9.3325ms for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
I0910 19:02:05.807335    5464 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
I0910 19:02:05.807399    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:05.807462    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.807462    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.807462    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.810007    5464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0910 19:02:05.811002    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:05.811002    5464 round_trippers.go:469] Request Headers:
I0910 19:02:05.811002    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:05.811002    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:05.814007    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:06.320907    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:06.320907    5464 round_trippers.go:469] Request Headers:
I0910 19:02:06.321229    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:06.321229    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:06.325894    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:06.326630    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:06.326630    5464 round_trippers.go:469] Request Headers:
I0910 19:02:06.326630    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:06.326630    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:06.331220    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:06.811485    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:06.811485    5464 round_trippers.go:469] Request Headers:
I0910 19:02:06.811550    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:06.811550    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:06.839433    5464 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
I0910 19:02:06.840831    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:06.840831    5464 round_trippers.go:469] Request Headers:
I0910 19:02:06.840831    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:06.840831    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:06.869432    5464 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
I0910 19:02:07.315658    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:07.315658    5464 round_trippers.go:469] Request Headers:
I0910 19:02:07.315658    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:07.315826    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:07.321218    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:07.322185    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:07.322185    5464 round_trippers.go:469] Request Headers:
I0910 19:02:07.322185    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:07.322185    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:07.326802    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:07.822004    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:07.822004    5464 round_trippers.go:469] Request Headers:
I0910 19:02:07.822004    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:07.822004    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:07.829573    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:07.830462    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:07.830566    5464 round_trippers.go:469] Request Headers:
I0910 19:02:07.830566    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:07.830566    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:07.835571    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:07.836517    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:08.313465    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:08.313465    5464 round_trippers.go:469] Request Headers:
I0910 19:02:08.313465    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:08.313465    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:08.324475    5464 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0910 19:02:08.326462    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:08.326462    5464 round_trippers.go:469] Request Headers:
I0910 19:02:08.326535    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:08.326535    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:08.333216    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:08.820004    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:08.820004    5464 round_trippers.go:469] Request Headers:
I0910 19:02:08.820004    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:08.820004    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:08.825993    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:08.827406    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:08.827406    5464 round_trippers.go:469] Request Headers:
I0910 19:02:08.827474    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:08.827584    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:08.832262    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:09.311454    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:09.311485    5464 round_trippers.go:469] Request Headers:
I0910 19:02:09.311536    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:09.311536    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:09.319375    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:09.320029    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:09.320029    5464 round_trippers.go:469] Request Headers:
I0910 19:02:09.320029    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:09.320029    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:09.323597    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:09.816789    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:09.816789    5464 round_trippers.go:469] Request Headers:
I0910 19:02:09.816789    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:09.816789    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:09.821926    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:09.822677    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:09.822677    5464 round_trippers.go:469] Request Headers:
I0910 19:02:09.822677    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:09.822677    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:09.828554    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:10.309869    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:10.309954    5464 round_trippers.go:469] Request Headers:
I0910 19:02:10.309954    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:10.309954    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:10.314486    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:10.316179    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:10.316252    5464 round_trippers.go:469] Request Headers:
I0910 19:02:10.316252    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:10.316252    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:10.320478    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:10.321784    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:10.818614    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:10.818848    5464 round_trippers.go:469] Request Headers:
I0910 19:02:10.818848    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:10.818848    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:10.823248    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:10.825022    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:10.825086    5464 round_trippers.go:469] Request Headers:
I0910 19:02:10.825086    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:10.825086    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:10.829753    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:11.314665    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:11.314738    5464 round_trippers.go:469] Request Headers:
I0910 19:02:11.314738    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:11.314738    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:11.322404    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:11.323654    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:11.323700    5464 round_trippers.go:469] Request Headers:
I0910 19:02:11.323700    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:11.323700    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:11.328563    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:11.817291    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:11.817291    5464 round_trippers.go:469] Request Headers:
I0910 19:02:11.817291    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:11.817291    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:11.821891    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:11.822989    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:11.822989    5464 round_trippers.go:469] Request Headers:
I0910 19:02:11.822989    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:11.822989    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:11.827956    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:12.317834    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:12.317834    5464 round_trippers.go:469] Request Headers:
I0910 19:02:12.317834    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:12.317834    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:12.322820    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:12.324673    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:12.324673    5464 round_trippers.go:469] Request Headers:
I0910 19:02:12.324673    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:12.324775    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:12.328926    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:12.330339    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:12.819541    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:12.819541    5464 round_trippers.go:469] Request Headers:
I0910 19:02:12.819626    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:12.819626    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:12.830798    5464 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0910 19:02:12.832554    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:12.832554    5464 round_trippers.go:469] Request Headers:
I0910 19:02:12.832554    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:12.832649    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:12.835989    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:13.320940    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:13.320940    5464 round_trippers.go:469] Request Headers:
I0910 19:02:13.320940    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:13.320940    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:13.325517    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:13.326549    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:13.326549    5464 round_trippers.go:469] Request Headers:
I0910 19:02:13.326549    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:13.326549    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:13.330796    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:13.820567    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:13.821123    5464 round_trippers.go:469] Request Headers:
I0910 19:02:13.821123    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:13.821123    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:13.828733    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:13.829500    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:13.829500    5464 round_trippers.go:469] Request Headers:
I0910 19:02:13.829500    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:13.829500    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:13.833619    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:14.309847    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:14.309847    5464 round_trippers.go:469] Request Headers:
I0910 19:02:14.309847    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:14.309937    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:14.314093    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:14.316018    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:14.316018    5464 round_trippers.go:469] Request Headers:
I0910 19:02:14.316080    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:14.316080    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:14.322945    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:14.810363    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:14.810428    5464 round_trippers.go:469] Request Headers:
I0910 19:02:14.810428    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:14.810428    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:14.815080    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:14.816801    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:14.816900    5464 round_trippers.go:469] Request Headers:
I0910 19:02:14.816900    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:14.816900    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:14.821257    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:14.821768    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:15.313088    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:15.313160    5464 round_trippers.go:469] Request Headers:
I0910 19:02:15.313160    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:15.313234    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:15.317597    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:15.319726    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:15.319726    5464 round_trippers.go:469] Request Headers:
I0910 19:02:15.319726    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:15.319726    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:15.325929    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:15.815085    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:15.815159    5464 round_trippers.go:469] Request Headers:
I0910 19:02:15.815159    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:15.815159    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:15.819375    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:15.820818    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:15.820919    5464 round_trippers.go:469] Request Headers:
I0910 19:02:15.820919    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:15.820919    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:15.825120    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:16.314633    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:16.314633    5464 round_trippers.go:469] Request Headers:
I0910 19:02:16.314633    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:16.314633    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:16.320683    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:16.323837    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:16.323917    5464 round_trippers.go:469] Request Headers:
I0910 19:02:16.323917    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:16.324003    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:16.328679    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:16.812763    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:16.812954    5464 round_trippers.go:469] Request Headers:
I0910 19:02:16.812954    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:16.812954    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:16.825325    5464 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
I0910 19:02:16.826318    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:16.826318    5464 round_trippers.go:469] Request Headers:
I0910 19:02:16.826318    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:16.826318    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:16.831168    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:16.831892    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:17.310487    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:17.310577    5464 round_trippers.go:469] Request Headers:
I0910 19:02:17.310577    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:17.310577    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:17.314857    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:17.316359    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:17.316359    5464 round_trippers.go:469] Request Headers:
I0910 19:02:17.316438    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:17.316438    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:17.322338    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:17.808541    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:17.808630    5464 round_trippers.go:469] Request Headers:
I0910 19:02:17.808630    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:17.808630    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:17.818760    5464 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0910 19:02:17.819763    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:17.819763    5464 round_trippers.go:469] Request Headers:
I0910 19:02:17.819763    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:17.819763    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:17.824891    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:18.323228    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:18.323228    5464 round_trippers.go:469] Request Headers:
I0910 19:02:18.323303    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:18.323303    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:18.328092    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:18.329856    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:18.329856    5464 round_trippers.go:469] Request Headers:
I0910 19:02:18.329856    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:18.329856    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:18.334274    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:18.810331    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:18.810331    5464 round_trippers.go:469] Request Headers:
I0910 19:02:18.810405    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:18.810405    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:18.815209    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:18.816730    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:18.816803    5464 round_trippers.go:469] Request Headers:
I0910 19:02:18.816803    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:18.816803    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:18.820702    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:19.309784    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:19.309920    5464 round_trippers.go:469] Request Headers:
I0910 19:02:19.309920    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:19.309920    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:19.314803    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:19.316593    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:19.316593    5464 round_trippers.go:469] Request Headers:
I0910 19:02:19.316682    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:19.316682    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:19.321222    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:19.321869    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:19.811721    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:19.811721    5464 round_trippers.go:469] Request Headers:
I0910 19:02:19.811721    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:19.811721    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:19.817237    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:19.818233    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:19.818306    5464 round_trippers.go:469] Request Headers:
I0910 19:02:19.818306    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:19.818306    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:19.821451    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:20.313224    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:20.313224    5464 round_trippers.go:469] Request Headers:
I0910 19:02:20.313317    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:20.313317    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:20.318543    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:20.319499    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:20.319499    5464 round_trippers.go:469] Request Headers:
I0910 19:02:20.319499    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:20.319499    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:20.322674    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:20.812749    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:20.812861    5464 round_trippers.go:469] Request Headers:
I0910 19:02:20.812861    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:20.812861    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:20.819006    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:20.819940    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:20.819940    5464 round_trippers.go:469] Request Headers:
I0910 19:02:20.819940    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:20.819940    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:20.827087    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:21.314353    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:21.314353    5464 round_trippers.go:469] Request Headers:
I0910 19:02:21.314353    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:21.314353    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:21.318635    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:21.320983    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:21.320983    5464 round_trippers.go:469] Request Headers:
I0910 19:02:21.321078    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:21.321078    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:21.326431    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:21.327440    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:21.818102    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:21.818102    5464 round_trippers.go:469] Request Headers:
I0910 19:02:21.818102    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:21.818102    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:21.833268    5464 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
I0910 19:02:21.834956    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:21.834956    5464 round_trippers.go:469] Request Headers:
I0910 19:02:21.834956    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:21.834956    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:21.859079    5464 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
I0910 19:02:22.320601    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:22.320601    5464 round_trippers.go:469] Request Headers:
I0910 19:02:22.320601    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:22.320601    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:22.327968    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:22.328713    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:22.328713    5464 round_trippers.go:469] Request Headers:
I0910 19:02:22.328713    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:22.328713    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:22.333444    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:22.818291    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:22.818291    5464 round_trippers.go:469] Request Headers:
I0910 19:02:22.818291    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:22.818291    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:22.824361    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:22.825680    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:22.825680    5464 round_trippers.go:469] Request Headers:
I0910 19:02:22.825680    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:22.825680    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:22.828977    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:23.316518    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:23.316518    5464 round_trippers.go:469] Request Headers:
I0910 19:02:23.316518    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:23.316518    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:23.323113    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:23.324826    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:23.324826    5464 round_trippers.go:469] Request Headers:
I0910 19:02:23.324826    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:23.324826    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:23.335007    5464 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0910 19:02:23.335007    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:23.815511    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:23.815511    5464 round_trippers.go:469] Request Headers:
I0910 19:02:23.815511    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:23.815511    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:23.823325    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:23.824400    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:23.824464    5464 round_trippers.go:469] Request Headers:
I0910 19:02:23.824520    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:23.824580    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:23.829740    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:24.320407    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:24.320407    5464 round_trippers.go:469] Request Headers:
I0910 19:02:24.320407    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:24.320407    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:24.325778    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:24.326952    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:24.327057    5464 round_trippers.go:469] Request Headers:
I0910 19:02:24.327057    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:24.327057    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:24.332622    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:24.822531    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:24.822531    5464 round_trippers.go:469] Request Headers:
I0910 19:02:24.822531    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:24.822531    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:24.830827    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:24.832582    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:24.832582    5464 round_trippers.go:469] Request Headers:
I0910 19:02:24.832582    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:24.832582    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:24.837010    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:25.324546    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:25.324640    5464 round_trippers.go:469] Request Headers:
I0910 19:02:25.324640    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:25.324640    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:25.329774    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:25.330766    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:25.330857    5464 round_trippers.go:469] Request Headers:
I0910 19:02:25.330857    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:25.330857    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:25.335021    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:25.335750    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:25.823952    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:25.823952    5464 round_trippers.go:469] Request Headers:
I0910 19:02:25.824022    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:25.824022    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:25.828662    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:25.830385    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:25.830385    5464 round_trippers.go:469] Request Headers:
I0910 19:02:25.830385    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:25.830385    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:25.833557    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:26.310243    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:26.310243    5464 round_trippers.go:469] Request Headers:
I0910 19:02:26.310243    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:26.310243    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:26.316241    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:26.317650    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:26.317650    5464 round_trippers.go:469] Request Headers:
I0910 19:02:26.317650    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:26.317650    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:26.322367    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:26.810965    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:26.811053    5464 round_trippers.go:469] Request Headers:
I0910 19:02:26.811053    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:26.811053    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:26.816970    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:26.818660    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:26.818660    5464 round_trippers.go:469] Request Headers:
I0910 19:02:26.818719    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:26.818719    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:26.823443    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:27.311784    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:27.311784    5464 round_trippers.go:469] Request Headers:
I0910 19:02:27.311784    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:27.311784    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:27.316969    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:27.317344    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:27.317344    5464 round_trippers.go:469] Request Headers:
I0910 19:02:27.317344    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:27.317344    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:27.321595    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:27.815053    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:27.815153    5464 round_trippers.go:469] Request Headers:
I0910 19:02:27.815153    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:27.815153    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:27.822824    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:27.822824    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:27.823862    5464 round_trippers.go:469] Request Headers:
I0910 19:02:27.823862    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:27.823862    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:27.840821    5464 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
I0910 19:02:27.841510    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:28.315315    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:28.315315    5464 round_trippers.go:469] Request Headers:
I0910 19:02:28.315315    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:28.315402    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:28.322963    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:28.324607    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:28.324782    5464 round_trippers.go:469] Request Headers:
I0910 19:02:28.324782    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:28.324782    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:28.329073    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:28.817483    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:28.817483    5464 round_trippers.go:469] Request Headers:
I0910 19:02:28.817483    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:28.817483    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:28.822880    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:28.823902    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:28.823902    5464 round_trippers.go:469] Request Headers:
I0910 19:02:28.823902    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:28.823902    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:28.829820    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:29.317801    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:29.317892    5464 round_trippers.go:469] Request Headers:
I0910 19:02:29.317968    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:29.317968    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:29.322334    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:29.324978    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:29.324978    5464 round_trippers.go:469] Request Headers:
I0910 19:02:29.325038    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:29.325038    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:29.330014    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:29.817673    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:29.817885    5464 round_trippers.go:469] Request Headers:
I0910 19:02:29.817885    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:29.817885    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:29.832250    5464 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
I0910 19:02:29.833610    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:29.833610    5464 round_trippers.go:469] Request Headers:
I0910 19:02:29.833610    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:29.833610    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:29.847021    5464 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
I0910 19:02:29.847955    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:30.318509    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:30.318509    5464 round_trippers.go:469] Request Headers:
I0910 19:02:30.318509    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:30.318509    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:30.323106    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:30.325137    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:30.325137    5464 round_trippers.go:469] Request Headers:
I0910 19:02:30.325137    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:30.325137    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:30.333706    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:30.820843    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:30.820843    5464 round_trippers.go:469] Request Headers:
I0910 19:02:30.820843    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:30.820843    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:30.826425    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:30.828194    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:30.828278    5464 round_trippers.go:469] Request Headers:
I0910 19:02:30.828278    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:30.828278    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:30.836430    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:31.321550    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:31.321609    5464 round_trippers.go:469] Request Headers:
I0910 19:02:31.321609    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:31.321609    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:31.326406    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:31.327151    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:31.327235    5464 round_trippers.go:469] Request Headers:
I0910 19:02:31.327235    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:31.327235    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:31.334088    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:31.821103    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:31.821390    5464 round_trippers.go:469] Request Headers:
I0910 19:02:31.821390    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:31.821390    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:31.829293    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:31.830625    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:31.830717    5464 round_trippers.go:469] Request Headers:
I0910 19:02:31.830717    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:31.830717    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:31.834903    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:32.323277    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:32.323277    5464 round_trippers.go:469] Request Headers:
I0910 19:02:32.323277    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:32.323277    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:32.328376    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:32.329823    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:32.329894    5464 round_trippers.go:469] Request Headers:
I0910 19:02:32.329894    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:32.329894    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:32.333156    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:32.334934    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:32.822217    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:32.822293    5464 round_trippers.go:469] Request Headers:
I0910 19:02:32.822293    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:32.822293    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:32.828035    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:32.829575    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:32.829575    5464 round_trippers.go:469] Request Headers:
I0910 19:02:32.829575    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:32.829686    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:32.833860    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:33.322986    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:33.322986    5464 round_trippers.go:469] Request Headers:
I0910 19:02:33.322986    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:33.322986    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:33.328201    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:33.328984    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:33.329049    5464 round_trippers.go:469] Request Headers:
I0910 19:02:33.329049    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:33.329049    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:33.335806    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:33.822631    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:33.822738    5464 round_trippers.go:469] Request Headers:
I0910 19:02:33.822738    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:33.822738    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:33.830089    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:33.831000    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:33.831000    5464 round_trippers.go:469] Request Headers:
I0910 19:02:33.831152    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:33.831152    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:33.835236    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:34.321734    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:34.321986    5464 round_trippers.go:469] Request Headers:
I0910 19:02:34.321986    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:34.321986    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:34.326383    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:34.327866    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:34.327866    5464 round_trippers.go:469] Request Headers:
I0910 19:02:34.327866    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:34.327866    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:34.334124    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:34.821621    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:34.821858    5464 round_trippers.go:469] Request Headers:
I0910 19:02:34.821858    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:34.821858    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:34.826648    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:34.828727    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:34.828727    5464 round_trippers.go:469] Request Headers:
I0910 19:02:34.828783    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:34.828783    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:34.833694    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:34.833931    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:35.323887    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:35.324139    5464 round_trippers.go:469] Request Headers:
I0910 19:02:35.324139    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:35.324139    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:35.333064    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:35.333964    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:35.333964    5464 round_trippers.go:469] Request Headers:
I0910 19:02:35.333964    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:35.333964    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:35.337552    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:35.812155    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:35.812155    5464 round_trippers.go:469] Request Headers:
I0910 19:02:35.812155    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:35.812155    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:35.819278    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:35.821223    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:35.821223    5464 round_trippers.go:469] Request Headers:
I0910 19:02:35.821223    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:35.821223    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:35.826537    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:36.313147    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:36.313147    5464 round_trippers.go:469] Request Headers:
I0910 19:02:36.313147    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:36.313147    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:36.319728    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:36.321415    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:36.321526    5464 round_trippers.go:469] Request Headers:
I0910 19:02:36.321526    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:36.321526    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:36.329736    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:36.813489    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:36.813588    5464 round_trippers.go:469] Request Headers:
I0910 19:02:36.813588    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:36.813588    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:36.821983    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:36.823639    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:36.823695    5464 round_trippers.go:469] Request Headers:
I0910 19:02:36.823695    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:36.823695    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:36.828777    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:37.314263    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:37.314494    5464 round_trippers.go:469] Request Headers:
I0910 19:02:37.314494    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:37.314494    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:37.319310    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:37.320995    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:37.321086    5464 round_trippers.go:469] Request Headers:
I0910 19:02:37.321086    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:37.321086    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:37.326291    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:37.327060    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:37.817583    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:37.817583    5464 round_trippers.go:469] Request Headers:
I0910 19:02:37.817583    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:37.817583    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:37.823024    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:37.824454    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:37.824527    5464 round_trippers.go:469] Request Headers:
I0910 19:02:37.824527    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:37.824527    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:37.830997    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:38.316796    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:38.317028    5464 round_trippers.go:469] Request Headers:
I0910 19:02:38.317028    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:38.317120    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:38.322723    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:38.323689    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:38.323689    5464 round_trippers.go:469] Request Headers:
I0910 19:02:38.323689    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:38.323689    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:38.327266    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:38.820626    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:38.820716    5464 round_trippers.go:469] Request Headers:
I0910 19:02:38.820716    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:38.820716    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:38.825935    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:38.827917    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:38.827992    5464 round_trippers.go:469] Request Headers:
I0910 19:02:38.827992    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:38.827992    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:38.832659    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:39.320669    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:39.320669    5464 round_trippers.go:469] Request Headers:
I0910 19:02:39.320820    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:39.320820    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:39.325562    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:39.327373    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:39.327430    5464 round_trippers.go:469] Request Headers:
I0910 19:02:39.327430    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:39.327430    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:39.331038    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:39.332705    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:39.821168    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:39.821168    5464 round_trippers.go:469] Request Headers:
I0910 19:02:39.821257    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:39.821257    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:39.826662    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:39.827701    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:39.827701    5464 round_trippers.go:469] Request Headers:
I0910 19:02:39.827701    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:39.827701    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:39.835224    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:40.317616    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:40.317669    5464 round_trippers.go:469] Request Headers:
I0910 19:02:40.317669    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:40.317702    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:40.326134    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:40.327392    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:40.327448    5464 round_trippers.go:469] Request Headers:
I0910 19:02:40.327448    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:40.327448    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:40.330868    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:40.816632    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:40.816632    5464 round_trippers.go:469] Request Headers:
I0910 19:02:40.816632    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:40.816632    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:40.822175    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:40.822829    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:40.822829    5464 round_trippers.go:469] Request Headers:
I0910 19:02:40.822829    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:40.822829    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:40.831299    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:41.317218    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:41.317218    5464 round_trippers.go:469] Request Headers:
I0910 19:02:41.317218    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:41.317218    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:41.321493    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:41.322157    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:41.322764    5464 round_trippers.go:469] Request Headers:
I0910 19:02:41.322764    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:41.322764    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:41.326806    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:41.817845    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:41.818122    5464 round_trippers.go:469] Request Headers:
I0910 19:02:41.818122    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:41.818122    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:41.822533    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:41.824555    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:41.824614    5464 round_trippers.go:469] Request Headers:
I0910 19:02:41.824614    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:41.824614    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:41.829106    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:41.829876    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:42.318494    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:42.318874    5464 round_trippers.go:469] Request Headers:
I0910 19:02:42.318874    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:42.318963    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:42.323606    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:42.324973    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:42.324973    5464 round_trippers.go:469] Request Headers:
I0910 19:02:42.324973    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:42.324973    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:42.329228    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:42.820475    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:42.820573    5464 round_trippers.go:469] Request Headers:
I0910 19:02:42.820573    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:42.820573    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:42.826034    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:42.827217    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:42.827217    5464 round_trippers.go:469] Request Headers:
I0910 19:02:42.827217    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:42.827217    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:42.834209    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:43.319946    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:43.320190    5464 round_trippers.go:469] Request Headers:
I0910 19:02:43.320190    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:43.320190    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:43.330159    5464 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0910 19:02:43.331351    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:43.331351    5464 round_trippers.go:469] Request Headers:
I0910 19:02:43.331351    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:43.331351    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:43.334267    5464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0910 19:02:43.820063    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:43.820263    5464 round_trippers.go:469] Request Headers:
I0910 19:02:43.820263    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:43.820263    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:43.826495    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:43.827302    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:43.827302    5464 round_trippers.go:469] Request Headers:
I0910 19:02:43.827302    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:43.827302    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:43.831458    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:43.832506    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:44.317988    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:44.318064    5464 round_trippers.go:469] Request Headers:
I0910 19:02:44.318064    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:44.318132    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:44.325823    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:44.327539    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:44.327597    5464 round_trippers.go:469] Request Headers:
I0910 19:02:44.327597    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:44.327597    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:44.331580    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:44.821539    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:44.821539    5464 round_trippers.go:469] Request Headers:
I0910 19:02:44.821647    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:44.821647    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:44.829834    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:44.830754    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:44.830754    5464 round_trippers.go:469] Request Headers:
I0910 19:02:44.830754    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:44.830754    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:44.840505    5464 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0910 19:02:45.319343    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:45.319617    5464 round_trippers.go:469] Request Headers:
I0910 19:02:45.319617    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:45.319710    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:45.324192    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:45.325546    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:45.325546    5464 round_trippers.go:469] Request Headers:
I0910 19:02:45.325546    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:45.325546    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:45.329151    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:45.825538    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:45.825646    5464 round_trippers.go:469] Request Headers:
I0910 19:02:45.825732    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:45.825732    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:45.832599    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:45.833602    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:45.833602    5464 round_trippers.go:469] Request Headers:
I0910 19:02:45.833602    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:45.833602    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:45.838599    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:45.838599    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:46.320983    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:46.320983    5464 round_trippers.go:469] Request Headers:
I0910 19:02:46.321050    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:46.321050    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:46.327808    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:46.328697    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:46.328697    5464 round_trippers.go:469] Request Headers:
I0910 19:02:46.328697    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:46.328697    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:46.332927    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:46.816699    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:46.816733    5464 round_trippers.go:469] Request Headers:
I0910 19:02:46.816783    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:46.816783    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:46.822426    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:46.824224    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:46.824224    5464 round_trippers.go:469] Request Headers:
I0910 19:02:46.824224    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:46.824224    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:46.828415    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:47.322681    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:47.322681    5464 round_trippers.go:469] Request Headers:
I0910 19:02:47.322681    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:47.322681    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:47.328722    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:47.330515    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:47.330515    5464 round_trippers.go:469] Request Headers:
I0910 19:02:47.330622    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:47.330622    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:47.334864    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:47.820628    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:47.820628    5464 round_trippers.go:469] Request Headers:
I0910 19:02:47.820887    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:47.820887    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:47.825207    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:47.827616    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:47.827616    5464 round_trippers.go:469] Request Headers:
I0910 19:02:47.827616    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:47.827616    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:47.831990    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:48.320651    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:48.320946    5464 round_trippers.go:469] Request Headers:
I0910 19:02:48.320946    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:48.320946    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:48.326290    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:48.327599    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:48.327699    5464 round_trippers.go:469] Request Headers:
I0910 19:02:48.327699    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:48.327699    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:48.330868    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:48.332023    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:48.822296    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:48.822395    5464 round_trippers.go:469] Request Headers:
I0910 19:02:48.822395    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:48.822395    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:48.827355    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:48.828553    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:48.828553    5464 round_trippers.go:469] Request Headers:
I0910 19:02:48.828553    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:48.828640    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:48.834293    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:49.322999    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:49.322999    5464 round_trippers.go:469] Request Headers:
I0910 19:02:49.322999    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:49.322999    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:49.328299    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:49.329573    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:49.329630    5464 round_trippers.go:469] Request Headers:
I0910 19:02:49.329630    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:49.329630    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:49.334275    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:49.826617    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:49.826731    5464 round_trippers.go:469] Request Headers:
I0910 19:02:49.826731    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:49.826731    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:49.833906    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:49.834869    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:49.834869    5464 round_trippers.go:469] Request Headers:
I0910 19:02:49.834869    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:49.834869    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:49.838888    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:50.318879    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:50.318879    5464 round_trippers.go:469] Request Headers:
I0910 19:02:50.318879    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:50.318879    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:50.325886    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:50.327573    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:50.327573    5464 round_trippers.go:469] Request Headers:
I0910 19:02:50.327638    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:50.327638    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:50.331531    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:50.818973    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:50.818973    5464 round_trippers.go:469] Request Headers:
I0910 19:02:50.818973    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:50.818973    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:50.825750    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:50.826771    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:50.826771    5464 round_trippers.go:469] Request Headers:
I0910 19:02:50.826771    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:50.826771    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:50.830372    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:50.832003    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:51.321501    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:51.321567    5464 round_trippers.go:469] Request Headers:
I0910 19:02:51.321567    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:51.321567    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:51.326120    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:51.326120    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:51.326120    5464 round_trippers.go:469] Request Headers:
I0910 19:02:51.326120    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:51.326120    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:51.331766    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:51.823705    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:51.823974    5464 round_trippers.go:469] Request Headers:
I0910 19:02:51.823974    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:51.823974    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:51.829348    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:51.830969    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:51.830969    5464 round_trippers.go:469] Request Headers:
I0910 19:02:51.830969    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:51.831042    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:51.839180    5464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0910 19:02:52.311995    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:52.312055    5464 round_trippers.go:469] Request Headers:
I0910 19:02:52.312107    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:52.312107    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:52.317404    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:52.318404    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:52.318404    5464 round_trippers.go:469] Request Headers:
I0910 19:02:52.318404    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:52.318404    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:52.322481    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:52.813069    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:52.813158    5464 round_trippers.go:469] Request Headers:
I0910 19:02:52.813158    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:52.813158    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:52.820264    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:52.821001    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:52.821001    5464 round_trippers.go:469] Request Headers:
I0910 19:02:52.821001    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:52.821001    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:52.825590    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:53.313441    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:53.313687    5464 round_trippers.go:469] Request Headers:
I0910 19:02:53.313687    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:53.313687    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:53.321232    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:53.321232    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:53.321232    5464 round_trippers.go:469] Request Headers:
I0910 19:02:53.321232    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:53.321232    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:53.326126    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:53.327371    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:53.817484    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:53.817484    5464 round_trippers.go:469] Request Headers:
I0910 19:02:53.817484    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:53.817484    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:53.821791    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:53.823457    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:53.823457    5464 round_trippers.go:469] Request Headers:
I0910 19:02:53.823457    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:53.823457    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:53.827641    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:54.319828    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:54.320279    5464 round_trippers.go:469] Request Headers:
I0910 19:02:54.320279    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:54.320279    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:54.325331    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:54.327071    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:54.327137    5464 round_trippers.go:469] Request Headers:
I0910 19:02:54.327137    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:54.327137    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:54.331871    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:54.825108    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:54.825353    5464 round_trippers.go:469] Request Headers:
I0910 19:02:54.825353    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:54.825353    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:54.836115    5464 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0910 19:02:54.836115    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:54.836115    5464 round_trippers.go:469] Request Headers:
I0910 19:02:54.836115    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:54.836115    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:54.841805    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:55.314309    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:55.314309    5464 round_trippers.go:469] Request Headers:
I0910 19:02:55.314309    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:55.314309    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:55.318645    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:55.319841    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:55.319841    5464 round_trippers.go:469] Request Headers:
I0910 19:02:55.319841    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:55.319841    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:55.323256    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:55.813275    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:55.813275    5464 round_trippers.go:469] Request Headers:
I0910 19:02:55.813275    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:55.813344    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:55.819891    5464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0910 19:02:55.820845    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:55.820845    5464 round_trippers.go:469] Request Headers:
I0910 19:02:55.820845    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:55.820845    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:55.828511    5464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0910 19:02:55.829579    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"
I0910 19:02:56.326070    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:56.326156    5464 round_trippers.go:469] Request Headers:
I0910 19:02:56.326156    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:56.326156    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:56.339231    5464 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
I0910 19:02:56.340283    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:56.340350    5464 round_trippers.go:469] Request Headers:
I0910 19:02:56.340350    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:56.340350    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:56.344287    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:56.825489    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:56.825607    5464 round_trippers.go:469] Request Headers:
I0910 19:02:56.825607    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:56.825607    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:56.830921    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:56.832089    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:56.832089    5464 round_trippers.go:469] Request Headers:
I0910 19:02:56.832089    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:56.832089    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:56.835367    5464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0910 19:02:57.321506    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:57.321576    5464 round_trippers.go:469] Request Headers:
I0910 19:02:57.321646    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:57.321646    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:57.326987    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:57.328074    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:57.328174    5464 round_trippers.go:469] Request Headers:
I0910 19:02:57.328174    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:57.328174    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:57.332342    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:57.824581    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
I0910 19:02:57.824581    5464 round_trippers.go:469] Request Headers:
I0910 19:02:57.824647    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:57.824647    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:57.829213    5464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0910 19:02:57.830541    5464 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
I0910 19:02:57.830603    5464 round_trippers.go:469] Request Headers:
I0910 19:02:57.830603    5464 round_trippers.go:473]     Accept: application/json, */*
I0910 19:02:57.830603    5464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0910 19:02:57.836352    5464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0910 19:02:57.837347    5464 pod_ready.go:103] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"False"

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-301400 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: context deadline exceeded (83.5µs)
E0910 19:02:59.740740    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: context deadline exceeded (42µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-301400 -n ha-301400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-301400 -n ha-301400: (10.7782914s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 logs -n 25: (7.8796325s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-301400 ssh -n                                                                                                          | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 18:54 UTC |
	|         | ha-301400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt                                                                       | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 18:55 UTC |
	|         | ha-301400:/home/docker/cp-test_ha-301400-m03_ha-301400.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n                                                                                                          | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:55 UTC | 10 Sep 24 18:55 UTC |
	|         | ha-301400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n ha-301400 sudo cat                                                                                       | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:55 UTC | 10 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-301400-m03_ha-301400.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt                                                                       | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:55 UTC | 10 Sep 24 18:55 UTC |
	|         | ha-301400-m02:/home/docker/cp-test_ha-301400-m03_ha-301400-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n                                                                                                          | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:55 UTC | 10 Sep 24 18:55 UTC |
	|         | ha-301400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n ha-301400-m02 sudo cat                                                                                   | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:55 UTC | 10 Sep 24 18:56 UTC |
	|         | /home/docker/cp-test_ha-301400-m03_ha-301400-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt                                                                       | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | ha-301400-m04:/home/docker/cp-test_ha-301400-m03_ha-301400-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n                                                                                                          | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | ha-301400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n ha-301400-m04 sudo cat                                                                                   | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | /home/docker/cp-test_ha-301400-m03_ha-301400-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-301400 cp testdata\cp-test.txt                                                                                         | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | ha-301400-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n                                                                                                          | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | ha-301400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt                                                                       | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2953471519\001\cp-test_ha-301400-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n                                                                                                          | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:57 UTC |
	|         | ha-301400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt                                                                       | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:57 UTC | 10 Sep 24 18:57 UTC |
	|         | ha-301400:/home/docker/cp-test_ha-301400-m04_ha-301400.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n                                                                                                          | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:57 UTC | 10 Sep 24 18:57 UTC |
	|         | ha-301400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n ha-301400 sudo cat                                                                                       | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:57 UTC | 10 Sep 24 18:57 UTC |
	|         | /home/docker/cp-test_ha-301400-m04_ha-301400.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt                                                                       | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:57 UTC | 10 Sep 24 18:57 UTC |
	|         | ha-301400-m02:/home/docker/cp-test_ha-301400-m04_ha-301400-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n                                                                                                          | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:57 UTC | 10 Sep 24 18:57 UTC |
	|         | ha-301400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n ha-301400-m02 sudo cat                                                                                   | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:57 UTC | 10 Sep 24 18:58 UTC |
	|         | /home/docker/cp-test_ha-301400-m04_ha-301400-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt                                                                       | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:58 UTC | 10 Sep 24 18:58 UTC |
	|         | ha-301400-m03:/home/docker/cp-test_ha-301400-m04_ha-301400-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n                                                                                                          | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:58 UTC | 10 Sep 24 18:58 UTC |
	|         | ha-301400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-301400 ssh -n ha-301400-m03 sudo cat                                                                                   | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:58 UTC | 10 Sep 24 18:58 UTC |
	|         | /home/docker/cp-test_ha-301400-m04_ha-301400-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-301400 node stop m02 -v=7                                                                                              | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 18:58 UTC | 10 Sep 24 18:59 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	| node    | ha-301400 node start m02 -v=7                                                                                             | ha-301400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:00 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
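	The command table above repeats one verification pattern for every node pair in the cluster: `minikube cp` writes the probe file to the destination node, then `minikube ssh -n <node> sudo cat` reads it back. A minimal sketch of how those node-to-node command pairs could be generated (an illustrative helper, not minikube's actual test code; host-to-node and node-to-host cases from the table are omitted):

```go
package main

import "fmt"

// cpVerifyCommands returns, for each ordered source/destination node pair,
// the `minikube cp` command that copies the probe file and the
// `minikube ssh` command that reads it back on the destination.
// Helper name and flag layout are illustrative only.
func cpVerifyCommands(profile string, nodes []string) [][2]string {
	var cmds [][2]string
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			remote := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			cp := fmt.Sprintf("minikube -p %s cp %s:/home/docker/cp-test.txt %s:%s",
				profile, src, dst, remote)
			cat := fmt.Sprintf("minikube -p %s ssh -n %s sudo cat %s",
				profile, dst, remote)
			cmds = append(cmds, [2]string{cp, cat})
		}
	}
	return cmds
}

func main() {
	nodes := []string{"ha-301400", "ha-301400-m02"}
	for _, pair := range cpVerifyCommands("ha-301400", nodes) {
		fmt.Println(pair[0])
		fmt.Println(pair[1])
	}
}
```

With four nodes, as in the run above, this expands to twelve cp/cat pairs, which matches the length of the logged command table.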
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:32:58
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:32:58.228794   10084 out.go:345] Setting OutFile to fd 672 ...
	I0910 18:32:58.273534   10084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:32:58.273534   10084 out.go:358] Setting ErrFile to fd 1288...
	I0910 18:32:58.273534   10084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:32:58.292400   10084 out.go:352] Setting JSON to false
	I0910 18:32:58.294449   10084 start.go:129] hostinfo: {"hostname":"minikube5","uptime":104441,"bootTime":1725888736,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:32:58.294449   10084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:32:58.298662   10084 out.go:177] * [ha-301400] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:32:58.304358   10084 notify.go:220] Checking for updates...
	I0910 18:32:58.304358   10084 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:32:58.307226   10084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:32:58.309546   10084 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:32:58.311640   10084 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:32:58.313397   10084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:32:58.316226   10084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:33:03.082908   10084 out.go:177] * Using the hyperv driver based on user configuration
	I0910 18:33:03.087238   10084 start.go:297] selected driver: hyperv
	I0910 18:33:03.087238   10084 start.go:901] validating driver "hyperv" against <nil>
	I0910 18:33:03.087238   10084 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:33:03.127579   10084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 18:33:03.128571   10084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:33:03.128658   10084 cni.go:84] Creating CNI manager for ""
	I0910 18:33:03.128658   10084 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0910 18:33:03.128658   10084 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 18:33:03.128812   10084 start.go:340] cluster config:
	{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:33:03.129006   10084 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:33:03.134487   10084 out.go:177] * Starting "ha-301400" primary control-plane node in "ha-301400" cluster
	I0910 18:33:03.139045   10084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:33:03.139207   10084 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 18:33:03.139289   10084 cache.go:56] Caching tarball of preloaded images
	I0910 18:33:03.139289   10084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 18:33:03.139289   10084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 18:33:03.139289   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:33:03.140434   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json: {Name:mkdbfef16912851ddb95bf4da9e8b839c6383d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:33:03.141517   10084 start.go:360] acquireMachinesLock for ha-301400: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:33:03.141606   10084 start.go:364] duration metric: took 34.5µs to acquireMachinesLock for "ha-301400"
	I0910 18:33:03.141950   10084 start.go:93] Provisioning new machine with config: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:33:03.142166   10084 start.go:125] createHost starting for "" (driver="hyperv")
	I0910 18:33:03.146455   10084 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 18:33:03.146753   10084 start.go:159] libmachine.API.Create for "ha-301400" (driver="hyperv")
	I0910 18:33:03.146829   10084 client.go:168] LocalClient.Create starting
	I0910 18:33:03.147329   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0910 18:33:03.147588   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:33:03.147655   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:33:03.147864   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0910 18:33:03.148075   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:33:03.148120   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:33:03.148215   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0910 18:33:05.004338   10084 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0910 18:33:05.004338   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:05.004620   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0910 18:33:06.552617   10084 main.go:141] libmachine: [stdout =====>] : False
	
	I0910 18:33:06.553226   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:06.553226   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:33:07.893295   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:33:07.893295   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:07.893388   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:33:11.070577   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:33:11.070577   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:11.071993   10084 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:33:11.430865   10084 main.go:141] libmachine: Creating SSH key...
	I0910 18:33:11.534293   10084 main.go:141] libmachine: Creating VM...
	I0910 18:33:11.534293   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:33:14.070238   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:33:14.070238   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:14.070352   10084 main.go:141] libmachine: Using switch "Default Switch"
	I0910 18:33:14.070552   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:33:15.659152   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:33:15.659319   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:15.659319   10084 main.go:141] libmachine: Creating VHD
	I0910 18:33:15.659319   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0910 18:33:19.007170   10084 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 04334C72-017E-4C24-A94A-4A6E611E26DF
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0910 18:33:19.007247   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:19.007247   10084 main.go:141] libmachine: Writing magic tar header
	I0910 18:33:19.007321   10084 main.go:141] libmachine: Writing SSH key tar header
	I0910 18:33:19.017343   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0910 18:33:21.977407   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:21.977407   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:21.978557   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\disk.vhd' -SizeBytes 20000MB
	I0910 18:33:24.310375   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:24.310375   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:24.310953   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-301400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0910 18:33:27.584997   10084 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-301400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0910 18:33:27.584997   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:27.584997   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-301400 -DynamicMemoryEnabled $false
	I0910 18:33:29.586782   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:29.586782   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:29.586782   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-301400 -Count 2
	I0910 18:33:31.507273   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:31.507273   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:31.508185   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-301400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\boot2docker.iso'
	I0910 18:33:33.818155   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:33.818155   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:33.818879   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-301400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\disk.vhd'
	I0910 18:33:36.101335   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:36.101335   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:36.101335   10084 main.go:141] libmachine: Starting VM...
	I0910 18:33:36.101981   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-301400
	I0910 18:33:38.878991   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:38.878991   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:38.878991   10084 main.go:141] libmachine: Waiting for host to start...
	I0910 18:33:38.879496   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:33:40.905504   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:33:40.905504   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:40.905504   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:33:43.197917   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:43.197917   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:44.201990   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:33:46.169313   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:33:46.169313   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:46.170456   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:33:48.414184   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:48.414184   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:49.416861   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:33:51.352272   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:33:51.352272   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:51.352695   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:33:53.575399   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:53.575399   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:54.588365   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:33:56.541809   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:33:56.541856   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:56.541856   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:33:58.739376   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:33:58.739596   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:33:59.754450   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:01.729937   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:01.729937   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:01.730026   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:04.086786   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:04.086786   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:04.086869   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:06.052400   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:06.053319   10084 main.go:141] libmachine: [stderr =====>] : 
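	In the "Waiting for host to start..." sequence above, libmachine alternates between querying `( Hyper-V\Get-VM ha-301400 ).state` and the first network adapter's IP address, pausing between rounds until Hyper-V reports an address (172.31.216.168 after roughly 25 seconds here). The poll-until-ready shape can be sketched as follows (a hypothetical helper, not libmachine's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls getIP until it returns a non-empty address or the
// attempt budget runs out, sleeping between tries the way the log above
// pauses between consecutive Get-VM queries. Illustrative only.
func waitForIP(getIP func() (string, error), attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := getIP()
		if err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("host did not report an IP address in time")
}

func main() {
	// Fake prober: empty for the first four calls, then an address,
	// mimicking the five polling rounds visible in the log.
	calls := 0
	probe := func() (string, error) {
		calls++
		if calls < 5 {
			return "", nil
		}
		return "172.31.216.168", nil
	}
	ip, err := waitForIP(probe, 10, time.Millisecond)
	fmt.Println(ip, err)
}
```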
	I0910 18:34:06.053319   10084 machine.go:93] provisionDockerMachine start ...
	I0910 18:34:06.053319   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:08.017485   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:08.017485   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:08.017598   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:10.340818   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:10.340818   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:10.347224   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:10.360971   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:10.360971   10084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:34:10.498909   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:34:10.498999   10084 buildroot.go:166] provisioning hostname "ha-301400"
	I0910 18:34:10.498999   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:12.413611   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:12.413611   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:12.413738   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:14.690664   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:14.690664   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:14.695308   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:14.695605   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:14.695782   10084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-301400 && echo "ha-301400" | sudo tee /etc/hostname
	I0910 18:34:14.858610   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-301400
	
	I0910 18:34:14.858840   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:16.780677   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:16.781642   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:16.781642   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:19.025595   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:19.025595   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:19.031327   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:19.031327   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:19.031327   10084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-301400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-301400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-301400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:34:19.178109   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:34:19.178109   10084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 18:34:19.178109   10084 buildroot.go:174] setting up certificates
	I0910 18:34:19.178109   10084 provision.go:84] configureAuth start
	I0910 18:34:19.178109   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:21.067254   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:21.067619   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:21.067619   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:23.357238   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:23.357238   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:23.358176   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:25.243907   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:25.244092   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:25.244092   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:27.467224   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:27.467224   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:27.467224   10084 provision.go:143] copyHostCerts
	I0910 18:34:27.467224   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 18:34:27.467224   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 18:34:27.467224   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 18:34:27.468260   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 18:34:27.469227   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 18:34:27.469227   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 18:34:27.469227   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 18:34:27.469227   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 18:34:27.470229   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 18:34:27.470229   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 18:34:27.470229   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 18:34:27.470229   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 18:34:27.471227   10084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-301400 san=[127.0.0.1 172.31.216.168 ha-301400 localhost minikube]
	I0910 18:34:27.741464   10084 provision.go:177] copyRemoteCerts
	I0910 18:34:27.749368   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:34:27.749368   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:29.669732   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:29.669732   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:29.670485   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:31.899271   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:31.899271   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:31.900175   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:34:32.008680   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2589948s)
	I0910 18:34:32.008680   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 18:34:32.009052   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:34:32.054066   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 18:34:32.055041   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0910 18:34:32.092681   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 18:34:32.092681   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:34:32.134940   10084 provision.go:87] duration metric: took 12.9559642s to configureAuth
	I0910 18:34:32.134940   10084 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:34:32.135545   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:34:32.135545   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:33.962770   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:33.962770   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:33.963062   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:36.209371   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:36.209371   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:36.215564   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:36.215564   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:36.215564   10084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 18:34:36.354095   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 18:34:36.354221   10084 buildroot.go:70] root file system type: tmpfs
	I0910 18:34:36.354519   10084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 18:34:36.354519   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:38.250553   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:38.250553   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:38.251099   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:40.607577   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:40.607577   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:40.614022   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:40.614515   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:40.614620   10084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 18:34:40.774118   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 18:34:40.774213   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:42.734020   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:42.734020   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:42.734531   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:45.000157   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:45.000157   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:45.004092   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:45.004567   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:45.004567   10084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 18:34:47.150506   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 18:34:47.150506   10084 machine.go:96] duration metric: took 41.0944394s to provisionDockerMachine
	I0910 18:34:47.151197   10084 client.go:171] duration metric: took 1m43.9974158s to LocalClient.Create
	I0910 18:34:47.151197   10084 start.go:167] duration metric: took 1m43.997536s to libmachine.API.Create "ha-301400"
	I0910 18:34:47.151197   10084 start.go:293] postStartSetup for "ha-301400" (driver="hyperv")
	I0910 18:34:47.151332   10084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:34:47.162151   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:34:47.162151   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:49.020072   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:49.021078   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:49.021143   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:51.277245   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:51.277245   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:51.278106   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:34:51.389724   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2266551s)
	I0910 18:34:51.398951   10084 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:34:51.404411   10084 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:34:51.404465   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 18:34:51.404772   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 18:34:51.405431   10084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 18:34:51.405517   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 18:34:51.414237   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:34:51.429404   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 18:34:51.470448   10084 start.go:296] duration metric: took 4.3188259s for postStartSetup
	I0910 18:34:51.473342   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:53.323075   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:53.323159   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:53.323159   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:55.591204   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:55.591204   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:55.591423   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:34:55.593214   10084 start.go:128] duration metric: took 1m52.4435735s to createHost
	I0910 18:34:55.593214   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:34:57.494331   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:34:57.494331   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:57.494331   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:34:59.754743   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:34:59.754798   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:34:59.758312   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:34:59.758830   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:34:59.758830   10084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:34:59.899077   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993300.120154858
	
	I0910 18:34:59.899077   10084 fix.go:216] guest clock: 1725993300.120154858
	I0910 18:34:59.899077   10084 fix.go:229] Guest: 2024-09-10 18:35:00.120154858 +0000 UTC Remote: 2024-09-10 18:34:55.5932142 +0000 UTC m=+117.432237401 (delta=4.526940658s)
	I0910 18:34:59.899077   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:01.840804   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:01.840804   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:01.841002   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:04.167790   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:04.167868   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:04.171698   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:35:04.171768   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.216.168 22 <nil> <nil>}
	I0910 18:35:04.171768   10084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725993299
	I0910 18:35:04.311304   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 18:34:59 UTC 2024
	
	I0910 18:35:04.311371   10084 fix.go:236] clock set: Tue Sep 10 18:34:59 UTC 2024
	 (err=<nil>)
	I0910 18:35:04.311371   10084 start.go:83] releasing machines lock for "ha-301400", held for 2m1.1615603s
	I0910 18:35:04.311598   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:06.304568   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:06.304568   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:06.305027   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:08.593147   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:08.593147   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:08.596476   10084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 18:35:08.596571   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:08.603667   10084 ssh_runner.go:195] Run: cat /version.json
	I0910 18:35:08.603667   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:10.578401   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:10.578476   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:10.578549   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:10.604983   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:10.604983   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:10.604983   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:12.969630   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:12.969630   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:12.970437   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:35:13.042074   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:13.042307   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:13.042568   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:35:13.085116   10084 ssh_runner.go:235] Completed: cat /version.json: (4.4811474s)
	I0910 18:35:13.099377   10084 ssh_runner.go:195] Run: systemctl --version
	I0910 18:35:13.104988   10084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5082087s)
	W0910 18:35:13.105117   10084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 18:35:13.124602   10084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:35:13.134090   10084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:35:13.143734   10084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:35:13.172614   10084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:35:13.172614   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:35:13.172614   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:35:13.219389   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 18:35:13.247774   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 18:35:13.266283   10084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	W0910 18:35:13.272436   10084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 18:35:13.272436   10084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 18:35:13.276274   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:35:13.305085   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:35:13.334448   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:35:13.361519   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:35:13.390882   10084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:35:13.420360   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:35:13.449104   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:35:13.476668   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 18:35:13.506878   10084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:35:13.533166   10084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:35:13.562006   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:13.769308   10084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 18:35:13.800135   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:35:13.810589   10084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 18:35:13.839428   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:35:13.870320   10084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:35:13.906220   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:35:13.940595   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:35:13.972586   10084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 18:35:14.030701   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:35:14.052989   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:35:14.099603   10084 ssh_runner.go:195] Run: which cri-dockerd
	I0910 18:35:14.113917   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 18:35:14.130926   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 18:35:14.173663   10084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 18:35:14.357458   10084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 18:35:14.530732   10084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 18:35:14.531024   10084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 18:35:14.571764   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:14.760265   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:35:17.320078   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5596409s)
	I0910 18:35:17.333028   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 18:35:17.366618   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:35:17.402235   10084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 18:35:17.589395   10084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 18:35:17.777185   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:17.953962   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 18:35:17.992732   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:35:18.024684   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:18.208533   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 18:35:18.309370   10084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 18:35:18.319958   10084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 18:35:18.328202   10084 start.go:563] Will wait 60s for crictl version
	I0910 18:35:18.337659   10084 ssh_runner.go:195] Run: which crictl
	I0910 18:35:18.351666   10084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:35:18.401247   10084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 18:35:18.409151   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:35:18.447244   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:35:18.484687   10084 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 18:35:18.484687   10084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 18:35:18.489470   10084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 18:35:18.489470   10084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 18:35:18.489470   10084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 18:35:18.489470   10084 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 18:35:18.494481   10084 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 18:35:18.494481   10084 ip.go:214] interface addr: 172.31.208.1/20
	I0910 18:35:18.502470   10084 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 18:35:18.508477   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
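Annotation (not part of the captured log): the bash one-liner above refreshes a single `/etc/hosts` entry idempotently by filtering out any stale line for the name and appending the current mapping. The same idiom against a scratch file; the file name and sample addresses are illustrative:

```shell
# Scratch file standing in for /etc/hosts, pre-seeded with a stale mapping.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n172.31.0.9\thost.minikube.internal\n' > "$HOSTS"
# Drop any existing host.minikube.internal line, then append the fresh one.
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; printf '172.31.208.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"
cat "$HOSTS"
```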
	I0910 18:35:18.540545   10084 kubeadm.go:883] updating cluster {Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:35:18.540545   10084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:35:18.548562   10084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 18:35:18.567569   10084 docker.go:685] Got preloaded images: 
	I0910 18:35:18.568546   10084 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0910 18:35:18.579665   10084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 18:35:18.608957   10084 ssh_runner.go:195] Run: which lz4
	I0910 18:35:18.614459   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0910 18:35:18.623268   10084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:35:18.629299   10084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:35:18.629924   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0910 18:35:19.848393   10084 docker.go:649] duration metric: took 1.2338511s to copy over tarball
	I0910 18:35:19.859594   10084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:35:28.631052   10084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7707755s)
	I0910 18:35:28.631185   10084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:35:28.687015   10084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 18:35:28.702230   10084 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0910 18:35:28.745365   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:28.948578   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:35:31.963149   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0142602s)
	I0910 18:35:31.972782   10084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 18:35:31.997624   10084 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 18:35:31.998355   10084 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:35:31.998355   10084 kubeadm.go:934] updating node { 172.31.216.168 8443 v1.31.0 docker true true} ...
	I0910 18:35:31.998355   10084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-301400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.216.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:35:32.005332   10084 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 18:35:32.068177   10084 cni.go:84] Creating CNI manager for ""
	I0910 18:35:32.068177   10084 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 18:35:32.068267   10084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:35:32.068267   10084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.216.168 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-301400 NodeName:ha-301400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.216.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.216.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:35:32.068507   10084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.216.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-301400"
	  kubeletExtraArgs:
	    node-ip: 172.31.216.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.216.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
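Annotation (not part of the captured log): the generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick shape check against a stand-in stream; `/tmp/kubeadm.demo.yaml` and its abbreviated contents are illustrative:

```shell
# Abbreviated stand-in for the kubeadm config the log prints above.
cat > /tmp/kubeadm.demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Count the documents by their "kind" line; expect four.
grep -c '^kind:' /tmp/kubeadm.demo.yaml
```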
	
	I0910 18:35:32.068574   10084 kube-vip.go:115] generating kube-vip config ...
	I0910 18:35:32.075577   10084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 18:35:32.100835   10084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 18:35:32.101142   10084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0910 18:35:32.112016   10084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:35:32.132106   10084 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:35:32.145182   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0910 18:35:32.161502   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0910 18:35:32.186513   10084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:35:32.214312   10084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0910 18:35:32.243944   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0910 18:35:32.279881   10084 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0910 18:35:32.285155   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:35:32.318149   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:35:32.493304   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:35:32.517188   10084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400 for IP: 172.31.216.168
	I0910 18:35:32.517267   10084 certs.go:194] generating shared ca certs ...
	I0910 18:35:32.517267   10084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.518138   10084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 18:35:32.518805   10084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 18:35:32.519039   10084 certs.go:256] generating profile certs ...
	I0910 18:35:32.519462   10084 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key
	I0910 18:35:32.519462   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.crt with IP's: []
	I0910 18:35:32.619052   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.crt ...
	I0910 18:35:32.619052   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.crt: {Name:mk6e209ff46848639d08a2ede17d2fe10608f8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.620690   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key ...
	I0910 18:35:32.620690   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key: {Name:mk7e408047fe704d36613f0fda393cd3b1d4780b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.622332   10084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.35399713
	I0910 18:35:32.622992   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.35399713 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.216.168 172.31.223.254]
	I0910 18:35:32.683183   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.35399713 ...
	I0910 18:35:32.683183   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.35399713: {Name:mk8d43549b50f7b266ef84c1b6f9fa794218fdeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.684108   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.35399713 ...
	I0910 18:35:32.684108   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.35399713: {Name:mk959b47b63eed73b48541c01794c05704d529ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:32.685028   10084 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.35399713 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt
	I0910 18:35:32.698217   10084 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.35399713 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key
	I0910 18:35:32.698459   10084 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key
	I0910 18:35:32.698459   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt with IP's: []
	I0910 18:35:33.033172   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt ...
	I0910 18:35:33.033172   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt: {Name:mk2ab2d2c77377541aa5c23e4746f768eba7b54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:33.035001   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key ...
	I0910 18:35:33.035001   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key: {Name:mkbdae0437ba8e6b1f313e812ff49f0f860c322a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:33.036119   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:35:33.036338   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:35:33.036338   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:35:33.036637   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:35:33.036637   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:35:33.036834   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:35:33.036968   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:35:33.048165   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:35:33.054439   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 18:35:33.055254   10084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 18:35:33.055254   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 18:35:33.055719   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 18:35:33.055719   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 18:35:33.055719   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 18:35:33.056909   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 18:35:33.057199   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:35:33.057373   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 18:35:33.057428   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 18:35:33.059265   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:35:33.104096   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 18:35:33.143835   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:35:33.186353   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 18:35:33.229061   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:35:33.271231   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:35:33.310221   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:35:33.351760   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:35:33.392825   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:35:33.433978   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 18:35:33.474761   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 18:35:33.513401   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:35:33.552248   10084 ssh_runner.go:195] Run: openssl version
	I0910 18:35:33.570302   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:35:33.596948   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:35:33.605775   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:35:33.613487   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:35:33.629302   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:35:33.654382   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 18:35:33.680723   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 18:35:33.686478   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 18:35:33.697449   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 18:35:33.717447   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 18:35:33.743792   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 18:35:33.769272   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 18:35:33.776096   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 18:35:33.788264   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 18:35:33.808819   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
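Annotation (not part of the captured log): the `openssl x509 -hash` / `ln -fs` steps above implement the c_rehash convention, where OpenSSL locates a CA certificate in the certs directory through a `<subject-hash>.0` symlink. A self-contained sketch using a scratch self-signed certificate; the directory and subject name are illustrative:

```shell
# Scratch directory standing in for /etc/ssl/certs.
mkdir -p /tmp/certs.demo
# Generate a throwaway self-signed certificate to hash and link.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj /CN=demoCA \
  -keyout /tmp/certs.demo/demo.key -out /tmp/certs.demo/demo.pem 2>/dev/null
# Compute the subject hash and create the <hash>.0 lookup symlink.
hash=$(openssl x509 -hash -noout -in /tmp/certs.demo/demo.pem)
ln -fs /tmp/certs.demo/demo.pem "/tmp/certs.demo/${hash}.0"
# The cert is now reachable by its hash name.
openssl x509 -noout -subject -in "/tmp/certs.demo/${hash}.0"
```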
	I0910 18:35:33.835567   10084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:35:33.840842   10084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:35:33.840979   10084 kubeadm.go:392] StartCluster: {Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:35:33.849744   10084 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 18:35:33.881888   10084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:35:33.907329   10084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:35:33.932261   10084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:35:33.948080   10084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:35:33.948141   10084 kubeadm.go:157] found existing configuration files:
	
	I0910 18:35:33.960542   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:35:33.978102   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:35:33.987549   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:35:34.017847   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:35:34.034159   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:35:34.044953   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:35:34.074945   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:35:34.091855   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:35:34.099510   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:35:34.127044   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:35:34.142281   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:35:34.152069   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:35:34.167826   10084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 18:35:34.344030   10084 kubeadm.go:310] W0910 18:35:34.568154    1776 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:35:34.345363   10084 kubeadm.go:310] W0910 18:35:34.569114    1776 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:35:34.469561   10084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 18:35:46.751279   10084 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 18:35:46.751344   10084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 18:35:46.751344   10084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 18:35:46.751948   10084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 18:35:46.751948   10084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 18:35:46.751948   10084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 18:35:46.756125   10084 out.go:235]   - Generating certificates and keys ...
	I0910 18:35:46.756192   10084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 18:35:46.756192   10084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 18:35:46.756712   10084 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 18:35:46.756842   10084 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 18:35:46.756842   10084 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 18:35:46.757368   10084 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 18:35:46.757543   10084 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 18:35:46.757543   10084 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-301400 localhost] and IPs [172.31.216.168 127.0.0.1 ::1]
	I0910 18:35:46.757543   10084 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 18:35:46.758226   10084 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-301400 localhost] and IPs [172.31.216.168 127.0.0.1 ::1]
	I0910 18:35:46.758226   10084 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 18:35:46.758226   10084 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 18:35:46.758901   10084 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 18:35:46.758958   10084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 18:35:46.758958   10084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 18:35:46.758958   10084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 18:35:46.758958   10084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 18:35:46.759678   10084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 18:35:46.759949   10084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 18:35:46.760237   10084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 18:35:46.760460   10084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 18:35:46.764579   10084 out.go:235]   - Booting up control plane ...
	I0910 18:35:46.764828   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 18:35:46.764964   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 18:35:46.765125   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 18:35:46.765384   10084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 18:35:46.765618   10084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 18:35:46.765765   10084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 18:35:46.766130   10084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 18:35:46.766455   10084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 18:35:46.766677   10084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002403393s
	I0910 18:35:46.766898   10084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 18:35:46.767025   10084 kubeadm.go:310] [api-check] The API server is healthy after 6.506476091s
	I0910 18:35:46.767346   10084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 18:35:46.767717   10084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 18:35:46.767847   10084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 18:35:46.768235   10084 kubeadm.go:310] [mark-control-plane] Marking the node ha-301400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 18:35:46.768421   10084 kubeadm.go:310] [bootstrap-token] Using token: bhmj7u.h9zw5qeczq3tbkjr
	I0910 18:35:46.770241   10084 out.go:235]   - Configuring RBAC rules ...
	I0910 18:35:46.770241   10084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 18:35:46.771245   10084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 18:35:46.772409   10084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 18:35:46.772537   10084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 18:35:46.772607   10084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 18:35:46.772694   10084 kubeadm.go:310] 
	I0910 18:35:46.772694   10084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 18:35:46.772694   10084 kubeadm.go:310] 
	I0910 18:35:46.772945   10084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 18:35:46.773017   10084 kubeadm.go:310] 
	I0910 18:35:46.773175   10084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 18:35:46.773354   10084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 18:35:46.773501   10084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 18:35:46.773542   10084 kubeadm.go:310] 
	I0910 18:35:46.773715   10084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 18:35:46.773786   10084 kubeadm.go:310] 
	I0910 18:35:46.773930   10084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 18:35:46.773962   10084 kubeadm.go:310] 
	I0910 18:35:46.774058   10084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 18:35:46.774205   10084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 18:35:46.774359   10084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 18:35:46.774359   10084 kubeadm.go:310] 
	I0910 18:35:46.774606   10084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 18:35:46.774891   10084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 18:35:46.774972   10084 kubeadm.go:310] 
	I0910 18:35:46.775110   10084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bhmj7u.h9zw5qeczq3tbkjr \
	I0910 18:35:46.775541   10084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b \
	I0910 18:35:46.775687   10084 kubeadm.go:310] 	--control-plane 
	I0910 18:35:46.775687   10084 kubeadm.go:310] 
	I0910 18:35:46.775824   10084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 18:35:46.775928   10084 kubeadm.go:310] 
	I0910 18:35:46.776141   10084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bhmj7u.h9zw5qeczq3tbkjr \
	I0910 18:35:46.776494   10084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
	I0910 18:35:46.776539   10084 cni.go:84] Creating CNI manager for ""
	I0910 18:35:46.776539   10084 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 18:35:46.781320   10084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0910 18:35:46.795596   10084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0910 18:35:46.802679   10084 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0910 18:35:46.802679   10084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0910 18:35:46.852139   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0910 18:35:47.360055   10084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:35:47.375182   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:47.375182   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-301400 minikube.k8s.io/updated_at=2024_09_10T18_35_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-301400 minikube.k8s.io/primary=true
	I0910 18:35:47.443123   10084 ops.go:34] apiserver oom_adj: -16
	I0910 18:35:47.693785   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:48.205844   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:48.708548   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:49.214699   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:49.695389   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:50.195868   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:50.703317   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:51.210202   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:51.698886   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:35:51.841318   10084 kubeadm.go:1113] duration metric: took 4.4809609s to wait for elevateKubeSystemPrivileges
	I0910 18:35:51.841480   10084 kubeadm.go:394] duration metric: took 17.9992874s to StartCluster
	I0910 18:35:51.841506   10084 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:51.841748   10084 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:35:51.843901   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:35:51.845269   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 18:35:51.845269   10084 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:35:51.845269   10084 start.go:241] waiting for startup goroutines ...
	I0910 18:35:51.845269   10084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 18:35:51.845545   10084 addons.go:69] Setting storage-provisioner=true in profile "ha-301400"
	I0910 18:35:51.845545   10084 addons.go:69] Setting default-storageclass=true in profile "ha-301400"
	I0910 18:35:51.845545   10084 addons.go:234] Setting addon storage-provisioner=true in "ha-301400"
	I0910 18:35:51.845545   10084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-301400"
	I0910 18:35:51.845633   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:35:51.846293   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:51.846942   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:35:51.849542   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:52.043793   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 18:35:52.450549   10084 start.go:971] {"host.minikube.internal": 172.31.208.1} host record injected into CoreDNS's ConfigMap
	I0910 18:35:53.876839   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:53.877508   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:53.877508   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:53.877508   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:53.878409   10084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:35:53.879100   10084 kapi.go:59] client config for ha-301400: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 18:35:53.880292   10084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:35:53.880369   10084 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 18:35:53.880704   10084 addons.go:234] Setting addon default-storageclass=true in "ha-301400"
	I0910 18:35:53.880774   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:35:53.881844   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:53.882468   10084 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:35:53.882468   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:35:53.882468   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:55.943846   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:55.943846   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:55.943914   10084 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:35:55.943914   10084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:35:55.943914   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:35:56.003436   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:56.003436   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:56.004057   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:57.959888   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:35:57.959888   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:57.959888   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:35:58.391426   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:35:58.392037   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:35:58.392362   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:35:58.527832   10084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:35:59.541988   10084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0137212s)
	I0910 18:36:00.261175   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:36:00.261392   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:00.261944   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:36:00.399850   10084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:36:00.535037   10084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 18:36:00.535037   10084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 18:36:00.535037   10084 round_trippers.go:463] GET https://172.31.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0910 18:36:00.535037   10084 round_trippers.go:469] Request Headers:
	I0910 18:36:00.535037   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:36:00.535037   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:36:00.547040   10084 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0910 18:36:00.547784   10084 round_trippers.go:463] PUT https://172.31.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0910 18:36:00.547784   10084 round_trippers.go:469] Request Headers:
	I0910 18:36:00.547784   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:36:00.547784   10084 round_trippers.go:473]     Content-Type: application/json
	I0910 18:36:00.547784   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:36:00.550337   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:36:00.555011   10084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0910 18:36:00.557571   10084 addons.go:510] duration metric: took 8.7117141s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0910 18:36:00.557571   10084 start.go:246] waiting for cluster config update ...
	I0910 18:36:00.557571   10084 start.go:255] writing updated cluster config ...
	I0910 18:36:00.560558   10084 out.go:201] 
	I0910 18:36:00.576139   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:36:00.576139   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:36:00.581161   10084 out.go:177] * Starting "ha-301400-m02" control-plane node in "ha-301400" cluster
	I0910 18:36:00.585141   10084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:36:00.585871   10084 cache.go:56] Caching tarball of preloaded images
	I0910 18:36:00.585871   10084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 18:36:00.586411   10084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 18:36:00.586529   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:36:00.590867   10084 start.go:360] acquireMachinesLock for ha-301400-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:36:00.591493   10084 start.go:364] duration metric: took 544.9µs to acquireMachinesLock for "ha-301400-m02"
	I0910 18:36:00.591632   10084 start.go:93] Provisioning new machine with config: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:36:00.591737   10084 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0910 18:36:00.596945   10084 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 18:36:00.597489   10084 start.go:159] libmachine.API.Create for "ha-301400" (driver="hyperv")
	I0910 18:36:00.597489   10084 client.go:168] LocalClient.Create starting
	I0910 18:36:00.597657   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:36:00.597995   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:36:00.597995   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0910 18:36:02.354685   10084 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0910 18:36:02.354685   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:02.354984   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0910 18:36:03.931473   10084 main.go:141] libmachine: [stdout =====>] : False
	
	I0910 18:36:03.931473   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:03.931678   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:36:05.302753   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:36:05.302753   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:05.303074   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:36:08.547454   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:36:08.548202   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:08.550444   10084 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:36:08.909602   10084 main.go:141] libmachine: Creating SSH key...
	I0910 18:36:09.024917   10084 main.go:141] libmachine: Creating VM...
	I0910 18:36:09.024917   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:36:11.579530   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:36:11.579530   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:11.579958   10084 main.go:141] libmachine: Using switch "Default Switch"
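The switch-selection step above parses the JSON that the `Get-VMSwitch` query prints to stdout (the query itself already filters to External switches or the well-known Default Switch GUID). A minimal Go sketch of that parsing; the `vmSwitch`/`pickSwitch` names are illustrative, not minikube's actual identifiers:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the PowerShell query
// (Select Id, Name, SwitchType). In Hyper-V's enumeration,
// SwitchType 1 is Internal (the Default Switch) and 2 is External.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// pickSwitch takes the first entry of the JSON array the query emits;
// an empty array means no usable switch was found.
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	if len(switches) == 0 {
		return "", fmt.Errorf("no usable Hyper-V switch found")
	}
	return switches[0].Name, nil
}

func main() {
	// The exact stdout captured in the log above, minified.
	out := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	name, err := pickSwitch(out)
	if err != nil {
		panic(err)
	}
	fmt.Println("Using switch", name)
}
```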
	I0910 18:36:11.580146   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:36:13.167653   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:36:13.167653   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:13.167653   10084 main.go:141] libmachine: Creating VHD
	I0910 18:36:13.167653   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0910 18:36:16.682065   10084 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 814E0FBE-6EB7-4615-9B41-C4E05E66854C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0910 18:36:16.682808   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:16.682808   10084 main.go:141] libmachine: Writing magic tar header
	I0910 18:36:16.682808   10084 main.go:141] libmachine: Writing SSH key tar header
	I0910 18:36:16.692879   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0910 18:36:19.601181   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:19.601181   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:19.601249   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\disk.vhd' -SizeBytes 20000MB
	I0910 18:36:21.924335   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:21.924335   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:21.924967   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-301400-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0910 18:36:25.171518   10084 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-301400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0910 18:36:25.171518   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:25.171518   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-301400-m02 -DynamicMemoryEnabled $false
	I0910 18:36:27.150523   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:27.150523   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:27.151253   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-301400-m02 -Count 2
	I0910 18:36:29.082156   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:29.082156   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:29.082236   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-301400-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\boot2docker.iso'
	I0910 18:36:31.363096   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:31.363096   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:31.363684   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-301400-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\disk.vhd'
	I0910 18:36:33.648824   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:33.648824   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:33.649881   10084 main.go:141] libmachine: Starting VM...
	I0910 18:36:33.649881   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-301400-m02
	I0910 18:36:36.321183   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:36.322292   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:36.322292   10084 main.go:141] libmachine: Waiting for host to start...
	I0910 18:36:36.322292   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:38.308004   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:38.309046   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:38.309046   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:36:40.534285   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:40.535293   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:41.544482   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:43.461742   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:43.461742   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:43.461742   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:36:45.669294   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:45.669369   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:46.683041   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:48.621635   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:48.621635   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:48.621635   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:36:50.842083   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:50.842265   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:51.844086   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:53.782081   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:53.782081   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:53.783065   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:36:55.969729   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:36:55.969729   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:56.972359   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:36:58.911237   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:36:58.911237   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:36:58.911879   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:01.178646   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:01.178646   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:01.178646   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:03.090108   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:03.091074   10084 main.go:141] libmachine: [stderr =====>] : 
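The "Waiting for host to start..." phase above alternates between querying the VM state and the first NIC's address until the guest finishes booting and DHCP completes (five empty polls here before `172.31.215.2` appears). The shape of that loop, sketched generically; `getIP` stands in for the PowerShell invocation and is an assumption of this sketch:

```go
package main

import (
	"fmt"
	"time"
)

// waitForIP polls until Hyper-V reports an address for the VM's first
// network adapter, or gives up after the allotted attempts.
func waitForIP(getIP func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := getIP(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", fmt.Errorf("machine did not report an IP after %d attempts", attempts)
}

func main() {
	calls := 0
	// Simulate a guest that needs five polls before DHCP completes,
	// matching the cadence in the log above.
	fake := func() string {
		calls++
		if calls < 5 {
			return ""
		}
		return "172.31.215.2"
	}
	ip, err := waitForIP(fake, 10, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println("got IP", ip)
}
```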
	I0910 18:37:03.091074   10084 machine.go:93] provisionDockerMachine start ...
	I0910 18:37:03.091074   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:04.991563   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:04.991563   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:04.991925   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:07.235599   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:07.235599   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:07.239473   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:07.252765   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:07.252765   10084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:37:07.384465   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:37:07.384465   10084 buildroot.go:166] provisioning hostname "ha-301400-m02"
	I0910 18:37:07.384465   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:09.224663   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:09.225415   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:09.225487   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:11.405348   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:11.405348   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:11.409743   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:11.410369   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:11.410369   10084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-301400-m02 && echo "ha-301400-m02" | sudo tee /etc/hostname
	I0910 18:37:11.568700   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-301400-m02
	
	I0910 18:37:11.568700   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:13.445214   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:13.445214   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:13.445214   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:15.650522   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:15.650859   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:15.654490   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:15.655083   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:15.655083   10084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-301400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-301400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-301400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:37:15.795222   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:37:15.795256   10084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 18:37:15.795311   10084 buildroot.go:174] setting up certificates
	I0910 18:37:15.795345   10084 provision.go:84] configureAuth start
	I0910 18:37:15.795345   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:17.656433   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:17.656433   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:17.656694   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:19.851896   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:19.851896   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:19.851896   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:21.669271   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:21.669681   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:21.669681   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:23.871072   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:23.871072   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:23.871072   10084 provision.go:143] copyHostCerts
	I0910 18:37:23.871072   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 18:37:23.871072   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 18:37:23.871072   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 18:37:23.871756   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 18:37:23.872444   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 18:37:23.872444   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 18:37:23.872444   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 18:37:23.873022   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 18:37:23.873214   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 18:37:23.873792   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 18:37:23.873792   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 18:37:23.873792   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 18:37:23.874774   10084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-301400-m02 san=[127.0.0.1 172.31.215.2 ha-301400-m02 localhost minikube]
	I0910 18:37:24.127406   10084 provision.go:177] copyRemoteCerts
	I0910 18:37:24.134401   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:37:24.135401   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:25.970303   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:25.970465   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:25.970465   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:28.225606   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:28.226345   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:28.226505   10084 sshutil.go:53] new ssh client: &{IP:172.31.215.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 18:37:28.325954   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1912691s)
	I0910 18:37:28.325954   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 18:37:28.325954   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 18:37:28.367868   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 18:37:28.367868   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:37:28.411528   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 18:37:28.411528   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 18:37:28.456947   10084 provision.go:87] duration metric: took 12.6607445s to configureAuth
	I0910 18:37:28.457029   10084 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:37:28.457518   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:37:28.457586   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:30.292757   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:30.292757   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:30.292757   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:32.491164   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:32.491164   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:32.496919   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:32.496919   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:32.496919   10084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 18:37:32.634489   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 18:37:32.634541   10084 buildroot.go:70] root file system type: tmpfs
	I0910 18:37:32.634657   10084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 18:37:32.634748   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:34.493151   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:34.493227   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:34.493227   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:36.690600   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:36.691031   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:36.694912   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:36.695495   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:36.695495   10084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.216.168"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 18:37:36.851219   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.216.168
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 18:37:36.851219   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:38.689860   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:38.689992   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:38.689992   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:40.962337   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:40.962337   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:40.966678   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:40.967195   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:40.967286   10084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 18:37:43.154486   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
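The `diff -u old new || { mv ...; systemctl ...; }` command above is a compare-then-swap idiom: the replace-and-restart branch runs only when `diff` exits non-zero, i.e. when the files differ — or, as in this log, when the old file does not exist yet ("can't stat"), which is why the first provision also takes that branch. A sketch against temp files (names illustrative; the real command also runs `daemon-reload`/`enable`/`restart`):

```shell
# Compare-then-swap: install the new unit only when content changed,
# so an unchanged config never triggers a needless service restart.
old=$(mktemp); new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd --old-flag\n' > "$old"
printf 'ExecStart=/usr/bin/dockerd --new-flag\n' > "$new"
if ! diff -u "$old" "$new" >/dev/null 2>&1; then
  mv "$new" "$old"   # differs (or old is missing): install the new version
else
  rm -f "$new"       # identical: leave the running service alone
fi
grep -- '--new-flag' "$old"   # the new content is now in place
```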
	I0910 18:37:43.154569   10084 machine.go:96] duration metric: took 40.0607822s to provisionDockerMachine
	I0910 18:37:43.154615   10084 client.go:171] duration metric: took 1m42.5501876s to LocalClient.Create
	I0910 18:37:43.154615   10084 start.go:167] duration metric: took 1m42.5501876s to libmachine.API.Create "ha-301400"
	I0910 18:37:43.154678   10084 start.go:293] postStartSetup for "ha-301400-m02" (driver="hyperv")
	I0910 18:37:43.154678   10084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:37:43.163469   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:37:43.163469   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:45.054788   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:45.054788   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:45.055560   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:47.328775   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:47.329201   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:47.329539   10084 sshutil.go:53] new ssh client: &{IP:172.31.215.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 18:37:47.432031   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2682198s)
	I0910 18:37:47.440811   10084 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:37:47.447311   10084 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:37:47.447401   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 18:37:47.447856   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 18:37:47.449223   10084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 18:37:47.449280   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 18:37:47.458778   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:37:47.475446   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 18:37:47.517035   10084 start.go:296] duration metric: took 4.3620618s for postStartSetup
	I0910 18:37:47.519063   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:49.438073   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:49.438141   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:49.438141   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:51.718923   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:51.718923   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:51.719856   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:37:51.724381   10084 start.go:128] duration metric: took 1m51.125124s to createHost
	I0910 18:37:51.724381   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:53.608867   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:53.609863   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:53.610093   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:37:55.818683   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:37:55.818683   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:55.823571   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:37:55.823851   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:37:55.823851   10084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:37:55.956107   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993476.176233482
	
	I0910 18:37:55.956163   10084 fix.go:216] guest clock: 1725993476.176233482
	I0910 18:37:55.956217   10084 fix.go:229] Guest: 2024-09-10 18:37:56.176233482 +0000 UTC Remote: 2024-09-10 18:37:51.7243813 +0000 UTC m=+293.551507901 (delta=4.451852182s)
	I0910 18:37:55.956282   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:37:57.784323   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:37:57.784323   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:37:57.784405   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:00.012292   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:38:00.012292   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:00.016253   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:38:00.016686   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.2 22 <nil> <nil>}
	I0910 18:38:00.016686   10084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725993475
	I0910 18:38:00.156374   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 18:37:55 UTC 2024
	
	I0910 18:38:00.156374   10084 fix.go:236] clock set: Tue Sep 10 18:37:55 UTC 2024
	 (err=<nil>)
	I0910 18:38:00.156374   10084 start.go:83] releasing machines lock for "ha-301400-m02", held for 1m59.5567897s
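The clock-fix step above reads the guest's epoch time over SSH (`date +%s.%N`), compares it with the host's, and, once the drift exceeds a threshold (here the log reports a 4.45s delta), sets the guest clock with `sudo date -s @<epoch>`. A pure-arithmetic sketch of that check (the host value and the 2-second threshold are illustrative, not minikube's exact logic):

```shell
# Guest/host clock-drift check, as in the fix.go step above.
guest=1725993476   # guest "date +%s.%N" from the log, fractional part dropped
host=1725993471    # host wall clock at the same instant (illustrative)
delta=$((guest - host))
if [ "${delta#-}" -gt 2 ]; then        # absolute drift above threshold
  echo "sudo date -s @$host"           # command to run over SSH
fi
```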
	I0910 18:38:00.157100   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:38:02.066216   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:02.066216   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:02.066662   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:04.308018   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:38:04.308988   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:04.311861   10084 out.go:177] * Found network options:
	I0910 18:38:04.315050   10084 out.go:177]   - NO_PROXY=172.31.216.168
	W0910 18:38:04.317623   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 18:38:04.319363   10084 out.go:177]   - NO_PROXY=172.31.216.168
	W0910 18:38:04.321363   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:38:04.322333   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 18:38:04.324338   10084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 18:38:04.324338   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:38:04.331209   10084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 18:38:04.331209   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:38:06.240949   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:06.240949   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:06.240949   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:06.256564   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:06.256783   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:06.256783   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:08.556797   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:38:08.556797   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:08.557667   10084 sshutil.go:53] new ssh client: &{IP:172.31.215.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 18:38:08.584826   10084 main.go:141] libmachine: [stdout =====>] : 172.31.215.2
	
	I0910 18:38:08.585033   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:08.585414   10084 sshutil.go:53] new ssh client: &{IP:172.31.215.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m02\id_rsa Username:docker}
	I0910 18:38:08.649636   10084 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3181349s)
	W0910 18:38:08.649753   10084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:38:08.662101   10084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:38:08.667518   10084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3428851s)
	W0910 18:38:08.667616   10084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
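The `status 127` / `command not found` failure above is a host/guest mismatch: the Windows host's binary name `curl.exe` was passed through to the Linux guest, where no such command exists. A portable probe would resolve the client name first; a sketch (helper name `pick_http_client` is hypothetical, not minikube code):

```shell
# Resolve the first available HTTP client instead of hard-coding a
# platform-specific binary name like "curl.exe".
pick_http_client() {
  for c in "$@"; do
    command -v "$c" >/dev/null 2>&1 && { echo "$c"; return 0; }
  done
  echo "none"
}
pick_http_client curl wget   # e.g. "curl" on the guest image; "none" if neither exists
```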
	I0910 18:38:08.698714   10084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:38:08.698714   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:38:08.699867   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:38:08.741544   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 18:38:08.768959   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 18:38:08.788005   10084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	W0910 18:38:08.793105   10084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 18:38:08.793174   10084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 18:38:08.798493   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:38:08.825532   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:38:08.853387   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:38:08.880006   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:38:08.912139   10084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:38:08.946233   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:38:08.974418   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:38:09.002644   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 18:38:09.028635   10084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:38:09.053681   10084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:38:09.079657   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:09.264390   10084 ssh_runner.go:195] Run: sudo systemctl restart containerd
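Each of the containerd configuration steps above is an idempotent in-place `sed` over `/etc/containerd/config.toml`; the `SystemdCgroup` rewrite is the one that actually selects the "cgroupfs" driver. A sketch of that single edit against a temp copy (GNU sed assumed, as on the Buildroot guest):

```shell
# In-place sed rewrite selecting cgroupfs, preserving the TOML indentation
# via the captured leading-whitespace group.
cfg=$(mktemp)
printf '    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"   # prints:     SystemdCgroup = false
```

Because the pattern matches any current value, re-running the same edit leaves the file unchanged, which is what makes the whole sequence safe to repeat on every provision.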
	I0910 18:38:09.294029   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:38:09.303772   10084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 18:38:09.331696   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:38:09.358595   10084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:38:09.393007   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:38:09.422563   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:38:09.451892   10084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 18:38:09.518028   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:38:09.539741   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:38:09.577573   10084 ssh_runner.go:195] Run: which cri-dockerd
	I0910 18:38:09.591379   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 18:38:09.608979   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 18:38:09.649897   10084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 18:38:09.826883   10084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 18:38:10.000014   10084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 18:38:10.000014   10084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 18:38:10.038845   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:10.216981   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:38:12.760934   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5437805s)
	I0910 18:38:12.769842   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 18:38:12.800491   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:38:12.831646   10084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 18:38:13.039037   10084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 18:38:13.221980   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:13.408088   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 18:38:13.446968   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:38:13.478243   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:13.670441   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 18:38:13.768009   10084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 18:38:13.781020   10084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 18:38:13.791077   10084 start.go:563] Will wait 60s for crictl version
	I0910 18:38:13.799478   10084 ssh_runner.go:195] Run: which crictl
	I0910 18:38:13.814704   10084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:38:13.864152   10084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 18:38:13.870330   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:38:13.909779   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:38:13.944492   10084 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 18:38:13.948389   10084 out.go:177]   - env NO_PROXY=172.31.216.168
	I0910 18:38:13.952559   10084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 18:38:13.956612   10084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 18:38:13.956612   10084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 18:38:13.956612   10084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 18:38:13.956612   10084 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 18:38:13.958613   10084 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 18:38:13.958613   10084 ip.go:214] interface addr: 172.31.208.1/20
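The `ip.go` lines above walk the host's interfaces and pick the first whose name starts with the configured prefix `"vEthernet (Default Switch)"`, rejecting `"Ethernet 2"` and the loopback pseudo-interface along the way. The selection can be sketched as a prefix match (helper name `match_iface` is hypothetical):

```shell
# First-interface-matching-prefix selection, as in the ip.go log lines above.
match_iface() {
  prefix=$1; shift
  for name in "$@"; do
    case $name in
      "$prefix"*) echo "$name"; return 0 ;;
    esac
  done
  echo "none"; return 1
}
match_iface "vEthernet (Default Switch)" \
  "Ethernet 2" "Loopback Pseudo-Interface 1" "vEthernet (Default Switch)"
# prints: vEthernet (Default Switch)
```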
	I0910 18:38:13.966606   10084 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 18:38:13.972600   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
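The `/etc/hosts` command above is an idempotent update: filter out any stale `host.minikube.internal` entry, append the current host IP, and install the result through a temp file so the target is replaced in one `cp`. A sketch against a scratch copy (the real target is `/etc/hosts`):

```shell
# Idempotent hosts-entry refresh: old entry removed, new one appended,
# result installed via a temp file.
hosts=$(mktemp); tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n172.31.99.9\thost.minikube.internal\n' > "$hosts"
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '172.31.208.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # prints 1: only the fresh entry survives
```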
	I0910 18:38:13.994682   10084 mustload.go:65] Loading cluster: ha-301400
	I0910 18:38:13.995020   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:38:13.995617   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:38:15.850580   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:15.851580   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:15.851580   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:38:15.851655   10084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400 for IP: 172.31.215.2
	I0910 18:38:15.852183   10084 certs.go:194] generating shared ca certs ...
	I0910 18:38:15.852183   10084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:38:15.852674   10084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 18:38:15.852953   10084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 18:38:15.853214   10084 certs.go:256] generating profile certs ...
	I0910 18:38:15.853667   10084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key
	I0910 18:38:15.853736   10084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.0fcf09ca
	I0910 18:38:15.853736   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.0fcf09ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.216.168 172.31.215.2 172.31.223.254]
	I0910 18:38:15.943090   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.0fcf09ca ...
	I0910 18:38:15.943090   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.0fcf09ca: {Name:mk90cceaad2b5f522282c93ac88ee15814df3b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:38:15.944091   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.0fcf09ca ...
	I0910 18:38:15.944091   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.0fcf09ca: {Name:mk2ace6151b6c6d84e8d5658fc9d86dd8b3b0160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:38:15.944875   10084 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.0fcf09ca -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt
	I0910 18:38:15.960402   10084 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.0fcf09ca -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key
	I0910 18:38:15.961040   10084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key
	I0910 18:38:15.961040   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:38:15.961040   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:38:15.961794   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:38:15.961794   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:38:15.961794   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:38:15.961794   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:38:15.962664   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:38:15.962664   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:38:15.963276   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 18:38:15.963544   10084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 18:38:15.963624   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 18:38:15.963624   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 18:38:15.963624   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 18:38:15.964346   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 18:38:15.964567   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 18:38:15.964567   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 18:38:15.964567   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 18:38:15.965178   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:38:15.965366   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:38:17.849116   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:17.849206   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:17.849318   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:20.112760   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:38:20.112760   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:20.113139   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:38:20.214460   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0910 18:38:20.222246   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0910 18:38:20.248316   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0910 18:38:20.254920   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0910 18:38:20.281861   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0910 18:38:20.287796   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0910 18:38:20.315614   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0910 18:38:20.321748   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0910 18:38:20.347275   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0910 18:38:20.354561   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0910 18:38:20.386090   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0910 18:38:20.392140   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0910 18:38:20.415668   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:38:20.463420   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 18:38:20.509240   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:38:20.560102   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 18:38:20.601783   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0910 18:38:20.644388   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:38:20.683772   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:38:20.732725   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:38:20.775054   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 18:38:20.821547   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 18:38:20.865075   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:38:20.909022   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0910 18:38:20.942265   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0910 18:38:20.970270   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0910 18:38:20.999013   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0910 18:38:21.030763   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0910 18:38:21.060134   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0910 18:38:21.092915   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0910 18:38:21.132616   10084 ssh_runner.go:195] Run: openssl version
	I0910 18:38:21.149089   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 18:38:21.174779   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 18:38:21.181189   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 18:38:21.190651   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 18:38:21.208365   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:38:21.235703   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:38:21.262463   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:38:21.269162   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:38:21.277268   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:38:21.294222   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:38:21.320003   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 18:38:21.349109   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 18:38:21.355518   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 18:38:21.362786   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 18:38:21.377997   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 18:38:21.405179   10084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:38:21.412360   10084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:38:21.412559   10084 kubeadm.go:934] updating node {m02 172.31.215.2 8443 v1.31.0 docker true true} ...
	I0910 18:38:21.412761   10084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-301400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.215.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:38:21.412799   10084 kube-vip.go:115] generating kube-vip config ...
	I0910 18:38:21.420075   10084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 18:38:21.444403   10084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 18:38:21.445400   10084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0910 18:38:21.456402   10084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:38:21.472576   10084 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 18:38:21.480541   10084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 18:38:21.499478   10084 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl
	I0910 18:38:21.500097   10084 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm
	I0910 18:38:21.500097   10084 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet
	I0910 18:38:22.547989   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 18:38:22.556519   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 18:38:22.564079   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 18:38:22.564370   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 18:38:22.634718   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 18:38:22.647722   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 18:38:22.683884   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 18:38:22.683884   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 18:38:22.754891   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:38:22.809195   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 18:38:22.818744   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 18:38:22.861392   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 18:38:22.861392   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0910 18:38:23.748517   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0910 18:38:23.766557   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0910 18:38:23.798847   10084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:38:23.838980   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 18:38:23.878721   10084 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0910 18:38:23.887094   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:38:23.916579   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:38:24.107719   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:38:24.134651   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:38:24.134811   10084 start.go:317] joinCluster: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:38:24.135344   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 18:38:24.135447   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:38:25.995070   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:38:25.995190   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:25.995257   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:38:28.248471   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:38:28.248471   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:38:28.249474   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:38:28.607517   10084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4718695s)
	I0910 18:38:28.607677   10084 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:38:28.607727   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iqr4jx.ozrvkw4labd4ob0a --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-301400-m02 --control-plane --apiserver-advertise-address=172.31.215.2 --apiserver-bind-port=8443"
	I0910 18:39:08.896023   10084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iqr4jx.ozrvkw4labd4ob0a --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-301400-m02 --control-plane --apiserver-advertise-address=172.31.215.2 --apiserver-bind-port=8443": (40.2855412s)
	I0910 18:39:08.896182   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 18:39:09.646716   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-301400-m02 minikube.k8s.io/updated_at=2024_09_10T18_39_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-301400 minikube.k8s.io/primary=false
	I0910 18:39:09.812265   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-301400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0910 18:39:09.960005   10084 start.go:319] duration metric: took 45.8220882s to joinCluster
	I0910 18:39:09.960005   10084 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:39:09.960722   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:39:09.963485   10084 out.go:177] * Verifying Kubernetes components...
	I0910 18:39:09.973499   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:39:10.234389   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:39:10.263874   10084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:39:10.264440   10084 kapi.go:59] client config for ha-301400: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0910 18:39:10.264619   10084 kubeadm.go:483] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.216.168:8443
	I0910 18:39:10.265234   10084 node_ready.go:35] waiting up to 6m0s for node "ha-301400-m02" to be "Ready" ...
	I0910 18:39:10.265234   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:10.265234   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:10.265234   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:10.265234   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:10.277642   10084 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0910 18:39:10.775504   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:10.775567   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:10.775567   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:10.775567   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:10.785041   10084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 18:39:11.280994   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:11.281062   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:11.281129   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:11.281129   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:11.284891   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:11.771839   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:11.771839   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:11.771839   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:11.771839   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:11.776411   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:12.277262   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:12.277262   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:12.277262   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:12.277262   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:12.283390   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:12.284646   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:12.771043   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:12.771043   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:12.771124   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:12.771124   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:12.779329   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:13.280294   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:13.280330   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:13.280380   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:13.280380   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:13.283555   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:13.771254   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:13.771290   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:13.771290   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:13.771290   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:13.774472   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:14.281062   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:14.281062   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:14.281062   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:14.281062   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:14.288056   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:14.288588   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:14.772522   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:14.772522   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:14.772522   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:14.772522   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:14.777458   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:15.278726   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:15.278726   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:15.278726   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:15.278726   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:15.283820   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:15.782324   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:15.782324   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:15.782324   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:15.782324   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:15.828289   10084 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0910 18:39:16.268974   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:16.268974   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:16.268974   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:16.268974   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:16.273940   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:16.774282   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:16.774466   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:16.774466   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:16.774466   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:16.779348   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:16.780848   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:17.268958   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:17.269026   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:17.269026   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:17.269026   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:17.273085   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:17.771930   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:17.771930   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:17.771930   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:17.771930   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:17.776674   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:18.273179   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:18.273268   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:18.273268   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:18.273268   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:18.279766   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:18.773354   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:18.773703   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:18.773703   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:18.773848   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:18.781873   10084 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 18:39:18.782820   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:19.271355   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:19.271355   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:19.271355   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:19.271355   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:19.276344   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:19.768270   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:19.768348   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:19.768348   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:19.768348   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:19.772659   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:20.269808   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:20.269808   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:20.269808   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:20.269808   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:20.275358   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:20.769026   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:20.769099   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:20.769099   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:20.769099   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:20.776810   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:21.270319   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:21.270391   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:21.270391   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:21.270391   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:21.275506   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:21.276707   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:21.772916   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:21.772916   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:21.772916   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:21.773015   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:21.780531   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:22.272215   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:22.272312   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:22.272312   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:22.272312   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:22.276863   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:22.770480   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:22.770480   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:22.770480   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:22.770480   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:22.776810   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:23.268019   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:23.268207   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:23.268207   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:23.268207   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:23.273404   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:23.770659   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:23.770659   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:23.770659   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:23.770659   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:23.777073   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:23.778048   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:24.270822   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:24.270822   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:24.270822   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:24.270896   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:24.279724   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:24.766978   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:24.767050   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:24.767050   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:24.767123   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:24.772293   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:25.268502   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:25.268502   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:25.268502   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:25.268502   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:25.273760   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:25.772103   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:25.772205   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:25.772289   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:25.772289   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:25.777205   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:26.271285   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:26.271285   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:26.271285   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:26.271285   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:26.274042   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:39:26.274797   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:26.774154   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:26.774154   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:26.774497   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:26.774497   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:26.779704   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:27.272222   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:27.272296   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:27.272296   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:27.272296   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:27.276692   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:27.772561   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:27.772648   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:27.772648   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:27.772648   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:27.778203   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:28.272355   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:28.272421   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:28.272421   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:28.272499   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:28.280123   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:28.281111   10084 node_ready.go:53] node "ha-301400-m02" has status "Ready":"False"
	I0910 18:39:28.769264   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:28.769600   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:28.769654   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:28.769654   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:28.779633   10084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 18:39:29.273383   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:29.273383   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.273494   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.273494   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.277967   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:29.279728   10084 node_ready.go:49] node "ha-301400-m02" has status "Ready":"True"
	I0910 18:39:29.279968   10084 node_ready.go:38] duration metric: took 19.0134462s for node "ha-301400-m02" to be "Ready" ...
	I0910 18:39:29.279968   10084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:39:29.280258   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:29.280258   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.280370   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.280370   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.286411   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:29.299330   10084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.300039   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fsbwc
	I0910 18:39:29.300039   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.300039   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.300039   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.304928   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:29.305770   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.305840   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.305840   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.305840   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.309002   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:39:29.309272   10084 pod_ready.go:93] pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.309272   10084 pod_ready.go:82] duration metric: took 9.9413ms for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.309814   10084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.309946   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ntqxc
	I0910 18:39:29.309946   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.309946   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.309946   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.315353   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:29.316480   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.316480   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.316480   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.316480   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.320086   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:29.321340   10084 pod_ready.go:93] pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.321340   10084 pod_ready.go:82] duration metric: took 11.5253ms for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.321340   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.321440   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400
	I0910 18:39:29.321440   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.321440   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.321440   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.324709   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:39:29.325200   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.325736   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.325807   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.325807   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.333701   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:29.333701   10084 pod_ready.go:93] pod "etcd-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.333701   10084 pod_ready.go:82] duration metric: took 12.3599ms for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.333701   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.334669   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 18:39:29.334669   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.334669   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.334669   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.336922   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:39:29.338470   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:29.338524   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.338524   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.338578   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.341488   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:39:29.342568   10084 pod_ready.go:93] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.342568   10084 pod_ready.go:82] duration metric: took 8.8666ms for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.342568   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.476159   10084 request.go:632] Waited for 133.0758ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400
	I0910 18:39:29.476159   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400
	I0910 18:39:29.476159   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.476159   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.476159   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.480967   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:29.679280   10084 request.go:632] Waited for 197.5253ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.679280   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:29.679670   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.679670   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.679670   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.686112   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:29.687082   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:29.687082   10084 pod_ready.go:82] duration metric: took 344.4898ms for pod "kube-apiserver-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.687082   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:29.882250   10084 request.go:632] Waited for 195.1554ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m02
	I0910 18:39:29.882250   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m02
	I0910 18:39:29.882656   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:29.882786   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:29.882820   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:29.887125   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.086242   10084 request.go:632] Waited for 197.7879ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:30.086242   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:30.086242   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.086242   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.086242   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.090255   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.090667   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:30.090667   10084 pod_ready.go:82] duration metric: took 403.5579ms for pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.090667   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.288308   10084 request.go:632] Waited for 197.6277ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400
	I0910 18:39:30.288792   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400
	I0910 18:39:30.288792   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.288792   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.288862   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.294840   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:30.474574   10084 request.go:632] Waited for 178.3027ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:30.474836   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:30.474836   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.474836   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.474836   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.479202   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.480685   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:30.480800   10084 pod_ready.go:82] duration metric: took 390.1067ms for pod "kube-controller-manager-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.480800   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.678212   10084 request.go:632] Waited for 197.2728ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m02
	I0910 18:39:30.678540   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m02
	I0910 18:39:30.678659   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.678659   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.678744   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.682823   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.882228   10084 request.go:632] Waited for 197.974ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:30.882228   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:30.882228   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:30.882228   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:30.882573   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:30.886618   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:30.887416   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:30.887481   10084 pod_ready.go:82] duration metric: took 406.6537ms for pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:30.887494   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hqkvv" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.084201   10084 request.go:632] Waited for 196.4573ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqkvv
	I0910 18:39:31.084521   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqkvv
	I0910 18:39:31.084521   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.084607   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.084607   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.092699   10084 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 18:39:31.286276   10084 request.go:632] Waited for 192.3399ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:31.286276   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:31.286276   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.286276   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.286276   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.290834   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:31.292001   10084 pod_ready.go:93] pod "kube-proxy-hqkvv" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:31.292001   10084 pod_ready.go:82] duration metric: took 404.48ms for pod "kube-proxy-hqkvv" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.292001   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sh5jk" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.474356   10084 request.go:632] Waited for 181.7389ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sh5jk
	I0910 18:39:31.474564   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sh5jk
	I0910 18:39:31.474564   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.474657   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.474657   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.482148   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:31.676409   10084 request.go:632] Waited for 193.0379ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:31.676665   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:31.676665   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.676665   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.676665   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.681165   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:31.682091   10084 pod_ready.go:93] pod "kube-proxy-sh5jk" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:31.682091   10084 pod_ready.go:82] duration metric: took 390.0631ms for pod "kube-proxy-sh5jk" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.682184   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:31.882147   10084 request.go:632] Waited for 199.9492ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400
	I0910 18:39:31.882480   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400
	I0910 18:39:31.882480   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:31.882480   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:31.882480   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:31.889949   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:39:32.087052   10084 request.go:632] Waited for 196.2544ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:32.087419   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:39:32.087419   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.087419   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.087419   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.093129   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:32.094360   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:32.094360   10084 pod_ready.go:82] duration metric: took 412.1483ms for pod "kube-scheduler-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:32.094360   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:32.274928   10084 request.go:632] Waited for 179.8087ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m02
	I0910 18:39:32.274928   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m02
	I0910 18:39:32.274928   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.274928   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.275078   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.280118   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:32.478906   10084 request.go:632] Waited for 198.2039ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:32.479148   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:39:32.479148   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.479148   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.479148   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.483534   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:32.484701   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:39:32.484701   10084 pod_ready.go:82] duration metric: took 390.3144ms for pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:39:32.484701   10084 pod_ready.go:39] duration metric: took 3.2045162s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:39:32.484701   10084 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:39:32.493364   10084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:39:32.515794   10084 api_server.go:72] duration metric: took 22.5542622s to wait for apiserver process to appear ...
	I0910 18:39:32.515870   10084 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:39:32.515954   10084 api_server.go:253] Checking apiserver healthz at https://172.31.216.168:8443/healthz ...
	I0910 18:39:32.526378   10084 api_server.go:279] https://172.31.216.168:8443/healthz returned 200:
	ok
	I0910 18:39:32.527159   10084 round_trippers.go:463] GET https://172.31.216.168:8443/version
	I0910 18:39:32.527223   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.527251   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.527251   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.528469   10084 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 18:39:32.528797   10084 api_server.go:141] control plane version: v1.31.0
	I0910 18:39:32.528820   10084 api_server.go:131] duration metric: took 12.9491ms to wait for apiserver health ...
	I0910 18:39:32.528820   10084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:39:32.682761   10084 request.go:632] Waited for 153.9297ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:32.682761   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:32.682761   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.682761   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.683067   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.693720   10084 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 18:39:32.701035   10084 system_pods.go:59] 17 kube-system pods found
	I0910 18:39:32.701105   10084 system_pods.go:61] "coredns-6f6b679f8f-fsbwc" [d8424acd-9e8d-42f0-bb05-3c81f13ce95f] Running
	I0910 18:39:32.701105   10084 system_pods.go:61] "coredns-6f6b679f8f-ntqxc" [255790ad-d8ab-404b-8a56-d4b0d4749c5f] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "etcd-ha-301400" [53ecd514-99fb-4e66-990a-1aae70a58578] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "etcd-ha-301400-m02" [53e8cab6-7a5d-481e-8147-034a63ded164] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kindnet-7zqv2" [5b0541a3-83c5-4e1d-8349-dd260cb795ed] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kindnet-jv4nt" [17e701eb-dfb9-4f30-8cd6-817e85ccef65] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-apiserver-ha-301400" [8616d1fb-0711-40db-bef3-3f3eb9caf056] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-apiserver-ha-301400-m02" [de432e1e-2312-4812-bbee-89731ea35356] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-controller-manager-ha-301400" [f59e11f0-96f0-4cb8-878f-9e5076f0a5f5] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-controller-manager-ha-301400-m02" [b48f9144-ac88-4139-a35f-0f7d0690821f] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-proxy-hqkvv" [f278de26-1d40-4bff-8681-779d4641ac03] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-proxy-sh5jk" [18240255-e421-4430-b4bb-49ff7b5430d8] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-scheduler-ha-301400" [08507d01-54f8-4cbe-84bb-aa4e06c16520] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-scheduler-ha-301400-m02" [3beb7194-4252-49ed-8476-f624db9af5fa] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-vip-ha-301400" [fe68ce21-9a06-4d4a-9dc1-d36e76a54ef1] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "kube-vip-ha-301400-m02" [b23f424b-46c2-4f19-9175-d061bb1966e5] Running
	I0910 18:39:32.701134   10084 system_pods.go:61] "storage-provisioner" [9d7be235-eee4-4f13-bc1c-7777806e77f2] Running
	I0910 18:39:32.701134   10084 system_pods.go:74] duration metric: took 172.3021ms to wait for pod list to return data ...
	I0910 18:39:32.701134   10084 default_sa.go:34] waiting for default service account to be created ...
	I0910 18:39:32.884795   10084 request.go:632] Waited for 183.3498ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/default/serviceaccounts
	I0910 18:39:32.885169   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/default/serviceaccounts
	I0910 18:39:32.885169   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:32.885169   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:32.885169   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:32.890535   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:39:32.891098   10084 default_sa.go:45] found service account: "default"
	I0910 18:39:32.891227   10084 default_sa.go:55] duration metric: took 190.08ms for default service account to be created ...
	I0910 18:39:32.891227   10084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 18:39:33.085149   10084 request.go:632] Waited for 193.9086ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:33.085436   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:39:33.085436   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:33.085436   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:33.085436   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:33.092435   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:39:33.101623   10084 system_pods.go:86] 17 kube-system pods found
	I0910 18:39:33.101623   10084 system_pods.go:89] "coredns-6f6b679f8f-fsbwc" [d8424acd-9e8d-42f0-bb05-3c81f13ce95f] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "coredns-6f6b679f8f-ntqxc" [255790ad-d8ab-404b-8a56-d4b0d4749c5f] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "etcd-ha-301400" [53ecd514-99fb-4e66-990a-1aae70a58578] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "etcd-ha-301400-m02" [53e8cab6-7a5d-481e-8147-034a63ded164] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kindnet-7zqv2" [5b0541a3-83c5-4e1d-8349-dd260cb795ed] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kindnet-jv4nt" [17e701eb-dfb9-4f30-8cd6-817e85ccef65] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-apiserver-ha-301400" [8616d1fb-0711-40db-bef3-3f3eb9caf056] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-apiserver-ha-301400-m02" [de432e1e-2312-4812-bbee-89731ea35356] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-controller-manager-ha-301400" [f59e11f0-96f0-4cb8-878f-9e5076f0a5f5] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-controller-manager-ha-301400-m02" [b48f9144-ac88-4139-a35f-0f7d0690821f] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-proxy-hqkvv" [f278de26-1d40-4bff-8681-779d4641ac03] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-proxy-sh5jk" [18240255-e421-4430-b4bb-49ff7b5430d8] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-scheduler-ha-301400" [08507d01-54f8-4cbe-84bb-aa4e06c16520] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-scheduler-ha-301400-m02" [3beb7194-4252-49ed-8476-f624db9af5fa] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-vip-ha-301400" [fe68ce21-9a06-4d4a-9dc1-d36e76a54ef1] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "kube-vip-ha-301400-m02" [b23f424b-46c2-4f19-9175-d061bb1966e5] Running
	I0910 18:39:33.101623   10084 system_pods.go:89] "storage-provisioner" [9d7be235-eee4-4f13-bc1c-7777806e77f2] Running
	I0910 18:39:33.101623   10084 system_pods.go:126] duration metric: took 210.3817ms to wait for k8s-apps to be running ...
	I0910 18:39:33.101623   10084 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 18:39:33.109254   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:39:33.131944   10084 system_svc.go:56] duration metric: took 30.3186ms WaitForService to wait for kubelet
	I0910 18:39:33.132490   10084 kubeadm.go:582] duration metric: took 23.1709166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:39:33.132490   10084 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:39:33.274656   10084 request.go:632] Waited for 142.0381ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes
	I0910 18:39:33.274656   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes
	I0910 18:39:33.274656   10084 round_trippers.go:469] Request Headers:
	I0910 18:39:33.274656   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:39:33.274656   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:39:33.279306   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:39:33.281541   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:39:33.281665   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:39:33.281665   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:39:33.281665   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:39:33.281665   10084 node_conditions.go:105] duration metric: took 149.1645ms to run NodePressure ...
	I0910 18:39:33.281665   10084 start.go:241] waiting for startup goroutines ...
	I0910 18:39:33.281665   10084 start.go:255] writing updated cluster config ...
	I0910 18:39:33.285594   10084 out.go:201] 
	I0910 18:39:33.304473   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:39:33.305075   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:39:33.313474   10084 out.go:177] * Starting "ha-301400-m03" control-plane node in "ha-301400" cluster
	I0910 18:39:33.315943   10084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 18:39:33.315943   10084 cache.go:56] Caching tarball of preloaded images
	I0910 18:39:33.316551   10084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 18:39:33.316551   10084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 18:39:33.316551   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:39:33.324007   10084 start.go:360] acquireMachinesLock for ha-301400-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:39:33.324563   10084 start.go:364] duration metric: took 555.7µs to acquireMachinesLock for "ha-301400-m03"
	I0910 18:39:33.324563   10084 start.go:93] Provisioning new machine with config: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:39:33.324563   10084 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0910 18:39:33.328477   10084 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 18:39:33.328477   10084 start.go:159] libmachine.API.Create for "ha-301400" (driver="hyperv")
	I0910 18:39:33.328477   10084 client.go:168] LocalClient.Create starting
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0910 18:39:33.329068   10084 main.go:141] libmachine: Decoding PEM data...
	I0910 18:39:33.329604   10084 main.go:141] libmachine: Parsing certificate...
	I0910 18:39:33.329677   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0910 18:39:35.052237   10084 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0910 18:39:35.052237   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:35.052312   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0910 18:39:36.647086   10084 main.go:141] libmachine: [stdout =====>] : False
	
	I0910 18:39:36.647086   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:36.647086   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:39:37.981223   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:39:37.981223   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:37.981298   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:39:41.215759   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:39:41.215759   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:41.217373   10084 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:39:41.550301   10084 main.go:141] libmachine: Creating SSH key...
	I0910 18:39:41.880191   10084 main.go:141] libmachine: Creating VM...
	I0910 18:39:41.880191   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 18:39:44.437109   10084 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 18:39:44.437934   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:44.438090   10084 main.go:141] libmachine: Using switch "Default Switch"
	I0910 18:39:44.438151   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 18:39:46.048089   10084 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 18:39:46.048954   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:46.048954   10084 main.go:141] libmachine: Creating VHD
	I0910 18:39:46.049029   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0910 18:39:49.415060   10084 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 523E1A95-D394-4013-A855-FDAC20B554DD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0910 18:39:49.415060   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:49.415060   10084 main.go:141] libmachine: Writing magic tar header
	I0910 18:39:49.415060   10084 main.go:141] libmachine: Writing SSH key tar header
	I0910 18:39:49.426432   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0910 18:39:52.371873   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:39:52.371873   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:52.372607   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\disk.vhd' -SizeBytes 20000MB
	I0910 18:39:54.701739   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:39:54.701898   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:54.701898   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-301400-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0910 18:39:57.939419   10084 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-301400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0910 18:39:57.939419   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:57.939419   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-301400-m03 -DynamicMemoryEnabled $false
	I0910 18:39:59.960646   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:39:59.961214   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:39:59.961214   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-301400-m03 -Count 2
	I0910 18:40:01.942861   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:01.942861   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:01.943516   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-301400-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\boot2docker.iso'
	I0910 18:40:04.274717   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:04.274949   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:04.274949   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-301400-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\disk.vhd'
	I0910 18:40:06.665497   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:06.665566   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:06.665566   10084 main.go:141] libmachine: Starting VM...
	I0910 18:40:06.665566   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-301400-m03
	I0910 18:40:09.503182   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:09.503182   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:09.503182   10084 main.go:141] libmachine: Waiting for host to start...
	I0910 18:40:09.503182   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:11.518957   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:11.519896   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:11.519968   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:13.738591   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:13.738591   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:14.750940   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:16.715340   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:16.716227   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:16.716305   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:18.975398   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:18.976174   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:19.989830   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:21.979637   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:21.979637   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:21.979637   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:24.250594   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:24.250594   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:25.266288   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:27.238262   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:27.238262   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:27.238262   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:29.457264   10084 main.go:141] libmachine: [stdout =====>] : 
	I0910 18:40:29.457264   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:30.458560   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:32.453380   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:32.454104   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:32.454104   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:34.854253   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:34.854447   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:34.854514   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:36.812352   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:36.812405   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:36.812405   10084 machine.go:93] provisionDockerMachine start ...
	I0910 18:40:36.812405   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:38.743323   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:38.744202   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:38.744202   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:41.031895   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:41.031944   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:41.035393   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:40:41.049326   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:40:41.049388   10084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:40:41.171928   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:40:41.171928   10084 buildroot.go:166] provisioning hostname "ha-301400-m03"
	I0910 18:40:41.171928   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:43.060959   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:43.060959   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:43.060959   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:45.291389   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:45.291389   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:45.295379   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:40:45.296010   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:40:45.296010   10084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-301400-m03 && echo "ha-301400-m03" | sudo tee /etc/hostname
	I0910 18:40:45.441242   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-301400-m03
	
	I0910 18:40:45.441319   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:47.322336   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:47.322336   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:47.322336   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:49.614787   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:49.614787   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:49.619971   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:40:49.619971   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:40:49.620496   10084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-301400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-301400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-301400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:40:49.757536   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:40:49.757536   10084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 18:40:49.757536   10084 buildroot.go:174] setting up certificates
	I0910 18:40:49.757536   10084 provision.go:84] configureAuth start
	I0910 18:40:49.757536   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:51.691806   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:51.691935   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:51.692012   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:53.997803   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:53.997803   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:53.998086   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:40:55.956618   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:40:55.956618   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:55.956732   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:40:58.225854   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:40:58.226699   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:40:58.226699   10084 provision.go:143] copyHostCerts
	I0910 18:40:58.226829   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 18:40:58.227173   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 18:40:58.227173   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 18:40:58.227173   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 18:40:58.228629   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 18:40:58.228629   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 18:40:58.229167   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 18:40:58.229448   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 18:40:58.230495   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 18:40:58.230495   10084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 18:40:58.230495   10084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 18:40:58.231192   10084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 18:40:58.231926   10084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-301400-m03 san=[127.0.0.1 172.31.217.146 ha-301400-m03 localhost minikube]
	I0910 18:40:58.469524   10084 provision.go:177] copyRemoteCerts
	I0910 18:40:58.477778   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:40:58.477778   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:00.357497   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:00.357581   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:00.357698   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:02.640198   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:02.640198   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:02.640519   10084 sshutil.go:53] new ssh client: &{IP:172.31.217.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\id_rsa Username:docker}
	I0910 18:41:02.746080   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2680139s)
	I0910 18:41:02.746080   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 18:41:02.746407   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 18:41:02.791824   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 18:41:02.792115   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:41:02.838513   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 18:41:02.838513   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 18:41:02.881135   10084 provision.go:87] duration metric: took 13.1227128s to configureAuth
	I0910 18:41:02.881180   10084 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:41:02.881211   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:41:02.881747   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:04.820005   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:04.820680   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:04.820680   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:07.159900   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:07.159900   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:07.165491   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:07.165643   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:07.165643   10084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 18:41:07.292663   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 18:41:07.292663   10084 buildroot.go:70] root file system type: tmpfs
	I0910 18:41:07.292663   10084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 18:41:07.292663   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:09.201150   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:09.201150   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:09.201560   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:11.503247   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:11.503247   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:11.507856   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:11.508276   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:11.508276   10084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.216.168"
	Environment="NO_PROXY=172.31.216.168,172.31.215.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 18:41:11.661740   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.216.168
	Environment=NO_PROXY=172.31.216.168,172.31.215.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 18:41:11.662264   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:13.583429   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:13.583429   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:13.584568   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:15.898826   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:15.898826   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:15.902553   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:15.902795   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:15.902795   10084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 18:41:18.095377   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 18:41:18.095377   10084 machine.go:96] duration metric: took 41.2801836s to provisionDockerMachine
	I0910 18:41:18.095377   10084 client.go:171] duration metric: took 1m44.7598153s to LocalClient.Create
	I0910 18:41:18.095377   10084 start.go:167] duration metric: took 1m44.7598153s to libmachine.API.Create "ha-301400"
	I0910 18:41:18.095377   10084 start.go:293] postStartSetup for "ha-301400-m03" (driver="hyperv")
	I0910 18:41:18.095377   10084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:41:18.105666   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:41:18.105666   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:20.015100   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:20.015100   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:20.015985   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:22.335910   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:22.336618   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:22.336668   10084 sshutil.go:53] new ssh client: &{IP:172.31.217.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\id_rsa Username:docker}
	I0910 18:41:22.434464   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3285051s)
	I0910 18:41:22.443469   10084 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:41:22.450856   10084 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:41:22.450856   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 18:41:22.450856   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 18:41:22.451504   10084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 18:41:22.451504   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 18:41:22.461507   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:41:22.479988   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 18:41:22.522918   10084 start.go:296] duration metric: took 4.4272421s for postStartSetup
	I0910 18:41:22.525121   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:24.423763   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:24.424572   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:24.424654   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:26.732650   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:26.732650   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:26.733647   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\config.json ...
	I0910 18:41:26.735438   10084 start.go:128] duration metric: took 1m53.4032081s to createHost
	I0910 18:41:26.735623   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:28.662368   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:28.663230   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:28.663320   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:30.987273   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:30.987273   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:30.991376   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:30.991759   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:30.991825   10084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:41:31.125190   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993691.342270655
	
	I0910 18:41:31.125266   10084 fix.go:216] guest clock: 1725993691.342270655
	I0910 18:41:31.125266   10084 fix.go:229] Guest: 2024-09-10 18:41:31.342270655 +0000 UTC Remote: 2024-09-10 18:41:26.7355524 +0000 UTC m=+508.548126501 (delta=4.606718255s)
	I0910 18:41:31.125266   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:33.026036   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:33.026960   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:33.026960   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:35.304095   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:35.304095   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:35.308690   10084 main.go:141] libmachine: Using SSH client type: native
	I0910 18:41:35.309069   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.217.146 22 <nil> <nil>}
	I0910 18:41:35.309171   10084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725993691
	I0910 18:41:35.446904   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 18:41:31 UTC 2024
	
	I0910 18:41:35.446904   10084 fix.go:236] clock set: Tue Sep 10 18:41:31 UTC 2024
	 (err=<nil>)
	I0910 18:41:35.446904   10084 start.go:83] releasing machines lock for "ha-301400-m03", held for 2m2.114086s
	I0910 18:41:35.447527   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:37.381658   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:37.381658   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:37.381727   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:39.674835   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:39.675787   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:39.678119   10084 out.go:177] * Found network options:
	I0910 18:41:39.680867   10084 out.go:177]   - NO_PROXY=172.31.216.168,172.31.215.2
	W0910 18:41:39.682790   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:41:39.682790   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 18:41:39.687097   10084 out.go:177]   - NO_PROXY=172.31.216.168,172.31.215.2
	W0910 18:41:39.689258   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:41:39.689293   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:41:39.690592   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 18:41:39.690662   10084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 18:41:39.692319   10084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 18:41:39.692319   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:39.698917   10084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 18:41:39.699926   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:41:41.643366   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:41.643366   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:41.643366   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:41.660959   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:41.661049   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:41.661049   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:43.985673   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:43.985854   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:43.986241   10084 sshutil.go:53] new ssh client: &{IP:172.31.217.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\id_rsa Username:docker}
	I0910 18:41:44.005360   10084 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:41:44.005360   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:44.005360   10084 sshutil.go:53] new ssh client: &{IP:172.31.217.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\id_rsa Username:docker}
	I0910 18:41:44.081008   10084 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.381796s)
	W0910 18:41:44.081077   10084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:41:44.090033   10084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:41:44.093187   10084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4005715s)
	W0910 18:41:44.093187   10084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 18:41:44.124843   10084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:41:44.124843   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:41:44.124843   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:41:44.169828   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 18:41:44.200526   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0910 18:41:44.216363   10084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 18:41:44.216744   10084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 18:41:44.225386   10084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 18:41:44.233014   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:41:44.260076   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:41:44.287226   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:41:44.313413   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:41:44.339402   10084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:41:44.365816   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:41:44.393641   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:41:44.419167   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 18:41:44.448391   10084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:41:44.478781   10084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:41:44.509113   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:44.697178   10084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 18:41:44.730640   10084 start.go:495] detecting cgroup driver to use...
	I0910 18:41:44.742597   10084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 18:41:44.777928   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:41:44.807962   10084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:41:44.840792   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:41:44.875486   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:41:44.905166   10084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 18:41:44.964114   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:41:44.989184   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:41:45.040806   10084 ssh_runner.go:195] Run: which cri-dockerd
	I0910 18:41:45.054585   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 18:41:45.072888   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 18:41:45.110566   10084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 18:41:45.296153   10084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 18:41:45.488512   10084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 18:41:45.488637   10084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 18:41:45.540709   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:45.737200   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 18:41:48.304284   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5669113s)
	I0910 18:41:48.312795   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 18:41:48.344320   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:41:48.375059   10084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 18:41:48.563151   10084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 18:41:48.747239   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:48.930629   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 18:41:48.966905   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 18:41:48.997827   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:49.185037   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 18:41:49.285589   10084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 18:41:49.294834   10084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 18:41:49.304102   10084 start.go:563] Will wait 60s for crictl version
	I0910 18:41:49.312924   10084 ssh_runner.go:195] Run: which crictl
	I0910 18:41:49.327550   10084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:41:49.375848   10084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 18:41:49.383879   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:41:49.423090   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 18:41:49.459325   10084 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 18:41:49.461833   10084 out.go:177]   - env NO_PROXY=172.31.216.168
	I0910 18:41:49.464026   10084 out.go:177]   - env NO_PROXY=172.31.216.168,172.31.215.2
	I0910 18:41:49.466623   10084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 18:41:49.469983   10084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 18:41:49.469983   10084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 18:41:49.469983   10084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 18:41:49.469983   10084 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 18:41:49.472745   10084 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 18:41:49.472745   10084 ip.go:214] interface addr: 172.31.208.1/20
	I0910 18:41:49.482904   10084 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 18:41:49.489292   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:41:49.510490   10084 mustload.go:65] Loading cluster: ha-301400
	I0910 18:41:49.511159   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:41:49.511858   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:41:51.374866   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:51.374866   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:51.375019   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:41:51.375712   10084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400 for IP: 172.31.217.146
	I0910 18:41:51.375789   10084 certs.go:194] generating shared ca certs ...
	I0910 18:41:51.375789   10084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:41:51.376751   10084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 18:41:51.376751   10084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 18:41:51.377312   10084 certs.go:256] generating profile certs ...
	I0910 18:41:51.377927   10084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\client.key
	I0910 18:41:51.378023   10084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.65ff3b51
	I0910 18:41:51.378185   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.65ff3b51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.216.168 172.31.215.2 172.31.217.146 172.31.223.254]
	I0910 18:41:51.650674   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.65ff3b51 ...
	I0910 18:41:51.650674   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.65ff3b51: {Name:mk7e75359aee205b4655795e8f8d7e03cf42ccc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:41:51.651932   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.65ff3b51 ...
	I0910 18:41:51.651932   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.65ff3b51: {Name:mk031be9627b72bc55c6cc69b16f4cca6f9c43f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:41:51.652932   10084 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt.65ff3b51 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt
	I0910 18:41:51.669397   10084 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key.65ff3b51 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key
	I0910 18:41:51.670594   10084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key
	I0910 18:41:51.670594   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:41:51.671178   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:41:51.671234   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:41:51.671234   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:41:51.671234   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:41:51.671234   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:41:51.672212   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:41:51.672212   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:41:51.672212   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 18:41:51.672957   10084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 18:41:51.672957   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 18:41:51.672957   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 18:41:51.673686   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 18:41:51.673686   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 18:41:51.674380   10084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 18:41:51.674380   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:41:51.674380   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 18:41:51.674380   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 18:41:51.675072   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:41:53.577807   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:41:53.577807   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:53.577807   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:41:55.832608   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:41:55.832608   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:41:55.832772   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:41:55.936529   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0910 18:41:55.944410   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0910 18:41:55.973391   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0910 18:41:55.980456   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0910 18:41:56.009680   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0910 18:41:56.017073   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0910 18:41:56.048746   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0910 18:41:56.055716   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0910 18:41:56.082397   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0910 18:41:56.088959   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0910 18:41:56.117919   10084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0910 18:41:56.124068   10084 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0910 18:41:56.143384   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:41:56.189469   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 18:41:56.237486   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:41:56.279607   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 18:41:56.323849   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0910 18:41:56.366998   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:41:56.412323   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:41:56.458809   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-301400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:41:56.505398   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:41:56.553087   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 18:41:56.599610   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 18:41:56.646529   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0910 18:41:56.675720   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0910 18:41:56.705034   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0910 18:41:56.733055   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0910 18:41:56.760737   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0910 18:41:56.794394   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0910 18:41:56.824014   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0910 18:41:56.864269   10084 ssh_runner.go:195] Run: openssl version
	I0910 18:41:56.879790   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 18:41:56.906262   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 18:41:56.912679   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 18:41:56.920684   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 18:41:56.938518   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 18:41:56.962983   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 18:41:56.990905   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 18:41:56.996840   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 18:41:57.004842   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 18:41:57.023411   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:41:57.054459   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:41:57.085087   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:41:57.092462   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:41:57.100068   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:41:57.116436   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
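The `openssl x509 -hash` / symlink steps above implement OpenSSL's CApath lookup convention: a verifier scans the certs directory for files named `<subject-hash>.0`. A hedged sketch of the same pattern using a throwaway self-signed certificate (all paths and the CN are illustrative, not minikube's CA):

```shell
# Generate a throwaway self-signed cert (illustrative, not minikube's CA).
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# The 8-hex-digit subject hash OpenSSL uses to locate the cert.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")

# Same link pattern as the log: <hash>.0 -> certificate file.
ln -fs "$dir/ca.pem" "$dir/$hash.0"
test -L "$dir/$hash.0"
rm -rf "$dir"
```

This is why the log's symlink names (`51391683.0`, `3ec20f2e.0`, `b5213941.0`) differ per certificate: each is the hash of that cert's subject name.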
	I0910 18:41:57.143443   10084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:41:57.150789   10084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:41:57.151047   10084 kubeadm.go:934] updating node {m03 172.31.217.146 8443 v1.31.0 docker true true} ...
	I0910 18:41:57.151136   10084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-301400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.217.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:41:57.151254   10084 kube-vip.go:115] generating kube-vip config ...
	I0910 18:41:57.159238   10084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 18:41:57.183011   10084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 18:41:57.183011   10084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0910 18:41:57.195533   10084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:41:57.209857   10084 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 18:41:57.217838   10084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 18:41:57.237693   10084 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0910 18:41:57.237693   10084 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0910 18:41:57.237778   10084 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
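The `?checksum=file:...sha256` suffix on the download URLs above tells minikube's downloader to verify each binary against its detached SHA-256 file. A minimal sketch of that verification step (the blob and file names are illustrative, not the real binaries):

```python
import hashlib

def verify_sha256(data: bytes, sha_file_contents: str) -> bool:
    """Compare a blob against a detached .sha256 file, as the downloader does.

    The .sha256 file may be in `sha256sum` format ("<hex>  <name>"),
    so take only the first whitespace-separated field.
    """
    expected = sha_file_contents.strip().split()[0]
    return hashlib.sha256(data).hexdigest() == expected

# Illustrative: a fake "binary" and its published checksum file contents.
blob = b"fake-kubectl-binary"
sha_file = hashlib.sha256(blob).hexdigest() + "  kubectl\n"

assert verify_sha256(blob, sha_file)          # intact download passes
assert not verify_sha256(b"tampered", sha_file)  # corruption is caught
```

A mismatch aborts the transfer, so a truncated or tampered `kubectl`/`kubeadm`/`kubelet` never lands in `/var/lib/minikube/binaries`.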
	I0910 18:41:57.237931   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 18:41:57.237931   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 18:41:57.249450   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 18:41:57.250316   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 18:41:57.251315   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:41:57.257908   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 18:41:57.257908   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 18:41:57.257908   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 18:41:57.257908   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 18:41:57.297810   10084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 18:41:57.308239   10084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 18:41:57.360202   10084 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 18:41:57.360202   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0910 18:41:58.351462   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0910 18:41:58.376501   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0910 18:41:58.408281   10084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:41:58.438069   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 18:41:58.480485   10084 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0910 18:41:58.488791   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
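The `{ grep -v ...; echo ...; } > /tmp/h.$$` pattern above is an idempotent hosts-file update: drop any stale `control-plane.minikube.internal` line, append the current VIP, then copy the result back. The same pattern on a scratch file (the path and stale IP are illustrative, and the grep pattern is simplified from the log's tab-anchored form):

```shell
# Scratch hosts file with a stale control-plane entry (illustrative).
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"

# Remove any old entry, append the fresh one, write via a temp file.
{ grep -v 'control-plane.minikube.internal$' "$hosts"; \
  printf '172.31.223.254\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

# Exactly one entry remains, pointing at the new VIP.
[ "$(grep -c 'control-plane.minikube.internal' "$hosts")" = 1 ]
grep -q '^172.31.223.254' "$hosts"
rm -f "$hosts"
```

Writing through a temp file and copying back keeps `/etc/hosts` from ever being observed half-written, and re-running the whole pipeline leaves exactly one entry regardless of how many times it runs.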
	I0910 18:41:58.519118   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:41:58.702421   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:41:58.729434   10084 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:41:58.730430   10084 start.go:317] joinCluster: &{Name:ha-301400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-301400 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.216.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.215.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.31.217.146 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:41:58.730430   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 18:41:58.730430   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:42:00.572708   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:42:00.572963   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:42:00.572963   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:42:02.852297   10084 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:42:02.852941   10084 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:42:02.853325   10084 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:42:03.039734   10084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3090139s)
	I0910 18:42:03.039907   10084 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.31.217.146 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:42:03.040082   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token craoai.8re530ootiipv2pi --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-301400-m03 --control-plane --apiserver-advertise-address=172.31.217.146 --apiserver-bind-port=8443"
	I0910 18:42:46.915020   10084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token craoai.8re530ootiipv2pi --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-301400-m03 --control-plane --apiserver-advertise-address=172.31.217.146 --apiserver-bind-port=8443": (43.8719044s)
	I0910 18:42:46.915020   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 18:42:47.668610   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-301400-m03 minikube.k8s.io/updated_at=2024_09_10T18_42_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-301400 minikube.k8s.io/primary=false
	I0910 18:42:47.827197   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-301400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0910 18:42:47.991643   10084 start.go:319] duration metric: took 49.2579005s to joinCluster
	I0910 18:42:47.991792   10084 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.31.217.146 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 18:42:47.992104   10084 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:42:47.995199   10084 out.go:177] * Verifying Kubernetes components...
	I0910 18:42:48.006587   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:42:48.399806   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:42:48.433709   10084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:42:48.434256   10084 kapi.go:59] client config for ha-301400: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-301400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0910 18:42:48.434441   10084 kubeadm.go:483] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.216.168:8443
	I0910 18:42:48.434705   10084 node_ready.go:35] waiting up to 6m0s for node "ha-301400-m03" to be "Ready" ...
	I0910 18:42:48.435254   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:48.435254   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:48.435254   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:48.435254   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:48.449998   10084 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0910 18:42:48.938449   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:48.938449   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:48.938449   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:48.938449   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:48.944280   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:42:49.444286   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:49.444327   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:49.444327   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:49.444360   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:49.448064   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:49.948818   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:49.949009   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:49.949009   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:49.949009   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:49.952794   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:50.442409   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:50.442409   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:50.442409   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:50.442409   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:50.445989   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:50.452223   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:50.948368   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:50.948368   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:50.948368   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:50.948368   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:50.952456   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:51.439228   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:51.439228   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:51.439228   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:51.439228   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:51.443496   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:51.948573   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:51.948791   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:51.948873   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:51.948873   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:52.180072   10084 round_trippers.go:574] Response Status: 200 OK in 231 milliseconds
	I0910 18:42:52.441824   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:52.441824   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:52.441824   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:52.441824   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:52.446434   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:52.951241   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:52.951307   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:52.951307   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:52.951307   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:52.954925   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:52.956172   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:53.443069   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:53.443149   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:53.443149   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:53.443149   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:53.495391   10084 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0910 18:42:53.944617   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:53.944617   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:53.944617   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:53.944617   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:53.948046   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:54.438216   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:54.438216   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:54.438216   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:54.438216   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:54.442787   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:54.938768   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:54.938927   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:54.938927   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:54.938927   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:54.942371   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:55.439393   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:55.439393   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:55.439393   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:55.439393   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:55.444182   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:55.444251   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:55.939343   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:55.939518   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:55.939518   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:55.939518   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:55.946814   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:42:56.437562   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:56.437562   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:56.437562   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:56.437562   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:56.441601   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:42:56.942452   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:56.942452   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:56.942452   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:56.942452   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:56.947148   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:57.441570   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:57.441570   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:57.441570   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:57.441570   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:57.447031   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:42:57.447949   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:57.943054   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:57.943054   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:57.943148   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:57.943148   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:57.946624   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:42:58.440893   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:58.440958   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:58.440958   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:58.440958   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:58.447361   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:42:58.941963   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:58.942019   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:58.942019   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:58.942019   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:58.951106   10084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 18:42:59.442766   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:59.442766   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:59.443067   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:59.443067   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:59.450321   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:42:59.451353   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:42:59.944842   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:42:59.944933   10084 round_trippers.go:469] Request Headers:
	I0910 18:42:59.944933   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:42:59.944933   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:42:59.949666   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:00.442797   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:00.442797   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:00.442797   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:00.442797   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:00.447269   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:00.945354   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:00.945377   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:00.945377   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:00.945377   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:00.949543   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:01.450005   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:01.450072   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:01.450072   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:01.450072   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:01.458499   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:01.459192   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:43:01.949858   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:01.949858   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:01.949858   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:01.949858   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:01.957069   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:02.449716   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:02.449828   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:02.449828   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:02.449828   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:02.454184   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:02.950123   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:02.950123   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:02.950123   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:02.950123   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:02.958112   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:03.449438   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:03.449438   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:03.449930   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:03.449952   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:03.456870   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:43:03.948720   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:03.948965   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:03.948965   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:03.948965   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:03.953441   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:03.953804   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:43:04.447656   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:04.447656   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:04.447656   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:04.447656   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:04.453925   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:43:04.947306   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:04.947496   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:04.947496   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:04.947496   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:04.952204   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:05.446997   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:05.446997   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:05.446997   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:05.446997   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:05.451537   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:05.947687   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:05.947785   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:05.947785   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:05.947785   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:05.951284   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:06.447072   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:06.447136   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:06.447136   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:06.447136   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:06.454913   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:06.456506   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:43:06.949058   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:06.949243   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:06.949243   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:06.949243   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:06.954443   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:07.437684   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:07.437745   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:07.437745   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:07.437745   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:07.445135   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:07.938198   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:07.938198   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:07.938284   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:07.938284   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:07.942659   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:08.438093   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:08.438093   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:08.438093   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:08.438093   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:08.443072   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:08.951438   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:08.951438   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:08.951438   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:08.951438   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:08.957833   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:43:08.959846   10084 node_ready.go:53] node "ha-301400-m03" has status "Ready":"False"
	I0910 18:43:09.449353   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:09.449584   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.449584   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.449584   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.455269   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:09.456839   10084 node_ready.go:49] node "ha-301400-m03" has status "Ready":"True"
	I0910 18:43:09.456899   10084 node_ready.go:38] duration metric: took 21.0207817s for node "ha-301400-m03" to be "Ready" ...
	I0910 18:43:09.456899   10084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:43:09.457022   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:09.457093   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.457093   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.457093   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.470016   10084 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0910 18:43:09.482250   10084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.482849   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fsbwc
	I0910 18:43:09.482849   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.482849   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.482849   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.486034   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.487028   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:09.487028   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.487028   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.487028   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.491309   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:09.492944   10084 pod_ready.go:93] pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.492944   10084 pod_ready.go:82] duration metric: took 10.6931ms for pod "coredns-6f6b679f8f-fsbwc" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.492944   10084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.493056   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ntqxc
	I0910 18:43:09.493056   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.493056   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.493056   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.496273   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.497717   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:09.497717   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.497771   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.497771   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.501233   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.501731   10084 pod_ready.go:93] pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.501731   10084 pod_ready.go:82] duration metric: took 8.7863ms for pod "coredns-6f6b679f8f-ntqxc" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.501796   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.501853   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400
	I0910 18:43:09.501853   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.501853   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.501853   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.505228   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.506217   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:09.506217   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.506217   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.506217   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.510379   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:09.511220   10084 pod_ready.go:93] pod "etcd-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.511303   10084 pod_ready.go:82] duration metric: took 9.5061ms for pod "etcd-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.511303   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.511405   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m02
	I0910 18:43:09.511440   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.511440   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.511440   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.515370   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:09.516295   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:09.516355   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.516355   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.516355   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.518584   10084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 18:43:09.520000   10084 pod_ready.go:93] pod "etcd-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.520064   10084 pod_ready.go:82] duration metric: took 8.7604ms for pod "etcd-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.520064   10084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.652630   10084 request.go:632] Waited for 132.2789ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m03
	I0910 18:43:09.652771   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-301400-m03
	I0910 18:43:09.652771   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.652771   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.652771   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.658017   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:09.854574   10084 request.go:632] Waited for 195.5722ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:09.854909   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:09.854909   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:09.854909   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:09.854909   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:09.860632   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:09.861959   10084 pod_ready.go:93] pod "etcd-ha-301400-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:09.861959   10084 pod_ready.go:82] duration metric: took 341.8724ms for pod "etcd-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:09.862035   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.058405   10084 request.go:632] Waited for 196.2328ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400
	I0910 18:43:10.058514   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400
	I0910 18:43:10.058514   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.058514   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.058514   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.067805   10084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 18:43:10.262579   10084 request.go:632] Waited for 193.1122ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:10.262830   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:10.262925   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.262925   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.262925   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.268295   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:10.269294   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:10.269384   10084 pod_ready.go:82] duration metric: took 407.2315ms for pod "kube-apiserver-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.269384   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.452332   10084 request.go:632] Waited for 182.6066ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m02
	I0910 18:43:10.452503   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m02
	I0910 18:43:10.452503   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.452503   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.452503   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.458014   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:10.655134   10084 request.go:632] Waited for 196.1131ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:10.655134   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:10.655134   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.655134   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.655134   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.659516   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:10.660318   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:10.660318   10084 pod_ready.go:82] duration metric: took 390.9078ms for pod "kube-apiserver-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.660318   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:10.858945   10084 request.go:632] Waited for 198.6133ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m03
	I0910 18:43:10.859267   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-301400-m03
	I0910 18:43:10.859267   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:10.859267   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:10.859364   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:10.863812   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:11.063735   10084 request.go:632] Waited for 198.0698ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:11.063735   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:11.063735   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.063735   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.063735   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.069545   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:11.070916   10084 pod_ready.go:93] pod "kube-apiserver-ha-301400-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:11.070916   10084 pod_ready.go:82] duration metric: took 410.5702ms for pod "kube-apiserver-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.071051   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.250697   10084 request.go:632] Waited for 179.5259ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400
	I0910 18:43:11.250796   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400
	I0910 18:43:11.250979   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.250979   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.251012   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.254579   10084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 18:43:11.457179   10084 request.go:632] Waited for 200.7345ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:11.457254   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:11.457254   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.457254   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.457254   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.462013   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:11.462700   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:11.462700   10084 pod_ready.go:82] duration metric: took 391.6225ms for pod "kube-controller-manager-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.462700   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.659562   10084 request.go:632] Waited for 196.6255ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m02
	I0910 18:43:11.659677   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m02
	I0910 18:43:11.659677   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.659677   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.659811   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.664988   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:11.860576   10084 request.go:632] Waited for 194.7892ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:11.861171   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:11.861171   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:11.861171   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:11.861236   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:11.865930   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:11.866905   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:11.866905   10084 pod_ready.go:82] duration metric: took 404.178ms for pod "kube-controller-manager-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:11.866905   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.064099   10084 request.go:632] Waited for 196.9392ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m03
	I0910 18:43:12.064304   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-301400-m03
	I0910 18:43:12.064304   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.064304   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.064304   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.068710   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:12.251852   10084 request.go:632] Waited for 182.3669ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:12.251852   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:12.252273   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.252273   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.252273   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.258181   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:12.258940   10084 pod_ready.go:93] pod "kube-controller-manager-ha-301400-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:12.258940   10084 pod_ready.go:82] duration metric: took 392.009ms for pod "kube-controller-manager-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.258940   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hqkvv" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.456116   10084 request.go:632] Waited for 196.9174ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqkvv
	I0910 18:43:12.456310   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqkvv
	I0910 18:43:12.456440   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.456440   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.456440   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.460764   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:12.658377   10084 request.go:632] Waited for 196.2862ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:12.658614   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:12.658614   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.658614   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.658614   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.662652   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:12.663405   10084 pod_ready.go:93] pod "kube-proxy-hqkvv" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:12.663476   10084 pod_ready.go:82] duration metric: took 404.5083ms for pod "kube-proxy-hqkvv" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.663476   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jczrq" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:12.863543   10084 request.go:632] Waited for 199.9413ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jczrq
	I0910 18:43:12.863986   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jczrq
	I0910 18:43:12.863986   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:12.863986   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:12.863986   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:12.869686   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:13.052474   10084 request.go:632] Waited for 181.7834ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:13.052771   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:13.052771   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.052771   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.052852   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.057624   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:13.058940   10084 pod_ready.go:93] pod "kube-proxy-jczrq" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:13.058940   10084 pod_ready.go:82] duration metric: took 395.4371ms for pod "kube-proxy-jczrq" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.058940   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sh5jk" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.255273   10084 request.go:632] Waited for 196.3202ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sh5jk
	I0910 18:43:13.255273   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sh5jk
	I0910 18:43:13.255273   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.255273   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.255273   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.260456   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:13.456319   10084 request.go:632] Waited for 194.1993ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:13.456610   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:13.456610   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.456610   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.456610   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.473221   10084 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0910 18:43:13.474407   10084 pod_ready.go:93] pod "kube-proxy-sh5jk" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:13.474407   10084 pod_ready.go:82] duration metric: took 415.4401ms for pod "kube-proxy-sh5jk" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.474407   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.660305   10084 request.go:632] Waited for 185.558ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400
	I0910 18:43:13.660684   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400
	I0910 18:43:13.660684   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.660760   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.660760   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.667209   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:13.865577   10084 request.go:632] Waited for 197.3168ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:13.865788   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400
	I0910 18:43:13.865788   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:13.865788   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:13.865892   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:13.870632   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:13.872464   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:13.872464   10084 pod_ready.go:82] duration metric: took 398.0301ms for pod "kube-scheduler-ha-301400" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:13.872537   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:14.051329   10084 request.go:632] Waited for 178.3777ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m02
	I0910 18:43:14.051440   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m02
	I0910 18:43:14.051440   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.051440   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.051440   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.057197   10084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 18:43:14.253749   10084 request.go:632] Waited for 194.4435ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:14.253749   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m02
	I0910 18:43:14.253856   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.253856   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.253856   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.261458   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:14.262398   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:14.262398   10084 pod_ready.go:82] duration metric: took 389.8017ms for pod "kube-scheduler-ha-301400-m02" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:14.262398   10084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:14.457047   10084 request.go:632] Waited for 194.3858ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m03
	I0910 18:43:14.457268   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-301400-m03
	I0910 18:43:14.457387   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.457387   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.457525   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.462393   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:14.659961   10084 request.go:632] Waited for 196.1271ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:14.660062   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes/ha-301400-m03
	I0910 18:43:14.660183   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.660239   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.660239   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.664528   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:14.665449   10084 pod_ready.go:93] pod "kube-scheduler-ha-301400-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:14.665449   10084 pod_ready.go:82] duration metric: took 403.0243ms for pod "kube-scheduler-ha-301400-m03" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:14.665449   10084 pod_ready.go:39] duration metric: took 5.2081407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:43:14.665449   10084 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:43:14.674018   10084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:43:14.700736   10084 api_server.go:72] duration metric: took 26.7070924s to wait for apiserver process to appear ...
	I0910 18:43:14.700736   10084 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:43:14.700736   10084 api_server.go:253] Checking apiserver healthz at https://172.31.216.168:8443/healthz ...
	I0910 18:43:14.710550   10084 api_server.go:279] https://172.31.216.168:8443/healthz returned 200:
	ok
	I0910 18:43:14.710872   10084 round_trippers.go:463] GET https://172.31.216.168:8443/version
	I0910 18:43:14.710872   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.710930   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.710930   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.711931   10084 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 18:43:14.711931   10084 api_server.go:141] control plane version: v1.31.0
	I0910 18:43:14.711931   10084 api_server.go:131] duration metric: took 11.195ms to wait for apiserver health ...
	I0910 18:43:14.711931   10084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:43:14.861690   10084 request.go:632] Waited for 149.6536ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:14.861690   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:14.861690   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:14.861690   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:14.861690   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:14.869437   10084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 18:43:14.877370   10084 system_pods.go:59] 24 kube-system pods found
	I0910 18:43:14.877370   10084 system_pods.go:61] "coredns-6f6b679f8f-fsbwc" [d8424acd-9e8d-42f0-bb05-3c81f13ce95f] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "coredns-6f6b679f8f-ntqxc" [255790ad-d8ab-404b-8a56-d4b0d4749c5f] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "etcd-ha-301400" [53ecd514-99fb-4e66-990a-1aae70a58578] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "etcd-ha-301400-m02" [53e8cab6-7a5d-481e-8147-034a63ded164] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "etcd-ha-301400-m03" [cd530f29-da8a-4b9b-a9c6-a93c637af337] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kindnet-7zqv2" [5b0541a3-83c5-4e1d-8349-dd260cb795ed] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kindnet-c72m2" [c1cda00f-a399-41b6-84a0-083a1e600757] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kindnet-jv4nt" [17e701eb-dfb9-4f30-8cd6-817e85ccef65] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-apiserver-ha-301400" [8616d1fb-0711-40db-bef3-3f3eb9caf056] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-apiserver-ha-301400-m02" [de432e1e-2312-4812-bbee-89731ea35356] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-apiserver-ha-301400-m03" [93058819-1974-4514-ac14-43eabf15c9fc] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-controller-manager-ha-301400" [f59e11f0-96f0-4cb8-878f-9e5076f0a5f5] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-controller-manager-ha-301400-m02" [b48f9144-ac88-4139-a35f-0f7d0690821f] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-controller-manager-ha-301400-m03" [e32de6c3-1398-437f-a942-8f8323461e4a] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-proxy-hqkvv" [f278de26-1d40-4bff-8681-779d4641ac03] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-proxy-jczrq" [d3cf8bce-7a23-4561-b4d5-8bbab4244624] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-proxy-sh5jk" [18240255-e421-4430-b4bb-49ff7b5430d8] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-scheduler-ha-301400" [08507d01-54f8-4cbe-84bb-aa4e06c16520] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-scheduler-ha-301400-m02" [3beb7194-4252-49ed-8476-f624db9af5fa] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-scheduler-ha-301400-m03" [58f0444b-c1f8-4a39-804c-5c05f79010a2] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-vip-ha-301400" [fe68ce21-9a06-4d4a-9dc1-d36e76a54ef1] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-vip-ha-301400-m02" [b23f424b-46c2-4f19-9175-d061bb1966e5] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "kube-vip-ha-301400-m03" [7f449d83-55ba-4467-89d0-abb1d20b4707] Running
	I0910 18:43:14.877370   10084 system_pods.go:61] "storage-provisioner" [9d7be235-eee4-4f13-bc1c-7777806e77f2] Running
	I0910 18:43:14.877370   10084 system_pods.go:74] duration metric: took 165.4275ms to wait for pod list to return data ...
	I0910 18:43:14.877370   10084 default_sa.go:34] waiting for default service account to be created ...
	I0910 18:43:15.049898   10084 request.go:632] Waited for 172.3658ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/default/serviceaccounts
	I0910 18:43:15.049898   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/default/serviceaccounts
	I0910 18:43:15.049898   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:15.049898   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:15.049898   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:15.054389   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:15.055269   10084 default_sa.go:45] found service account: "default"
	I0910 18:43:15.055269   10084 default_sa.go:55] duration metric: took 177.8867ms for default service account to be created ...
	I0910 18:43:15.055269   10084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 18:43:15.252498   10084 request.go:632] Waited for 197.1081ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:15.252849   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/namespaces/kube-system/pods
	I0910 18:43:15.252849   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:15.252849   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:15.252849   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:15.260537   10084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 18:43:15.272068   10084 system_pods.go:86] 24 kube-system pods found
	I0910 18:43:15.272068   10084 system_pods.go:89] "coredns-6f6b679f8f-fsbwc" [d8424acd-9e8d-42f0-bb05-3c81f13ce95f] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "coredns-6f6b679f8f-ntqxc" [255790ad-d8ab-404b-8a56-d4b0d4749c5f] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "etcd-ha-301400" [53ecd514-99fb-4e66-990a-1aae70a58578] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "etcd-ha-301400-m02" [53e8cab6-7a5d-481e-8147-034a63ded164] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "etcd-ha-301400-m03" [cd530f29-da8a-4b9b-a9c6-a93c637af337] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kindnet-7zqv2" [5b0541a3-83c5-4e1d-8349-dd260cb795ed] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kindnet-c72m2" [c1cda00f-a399-41b6-84a0-083a1e600757] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kindnet-jv4nt" [17e701eb-dfb9-4f30-8cd6-817e85ccef65] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-apiserver-ha-301400" [8616d1fb-0711-40db-bef3-3f3eb9caf056] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-apiserver-ha-301400-m02" [de432e1e-2312-4812-bbee-89731ea35356] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-apiserver-ha-301400-m03" [93058819-1974-4514-ac14-43eabf15c9fc] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-controller-manager-ha-301400" [f59e11f0-96f0-4cb8-878f-9e5076f0a5f5] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-controller-manager-ha-301400-m02" [b48f9144-ac88-4139-a35f-0f7d0690821f] Running
	I0910 18:43:15.272068   10084 system_pods.go:89] "kube-controller-manager-ha-301400-m03" [e32de6c3-1398-437f-a942-8f8323461e4a] Running
	I0910 18:43:15.272605   10084 system_pods.go:89] "kube-proxy-hqkvv" [f278de26-1d40-4bff-8681-779d4641ac03] Running
	I0910 18:43:15.272605   10084 system_pods.go:89] "kube-proxy-jczrq" [d3cf8bce-7a23-4561-b4d5-8bbab4244624] Running
	I0910 18:43:15.272605   10084 system_pods.go:89] "kube-proxy-sh5jk" [18240255-e421-4430-b4bb-49ff7b5430d8] Running
	I0910 18:43:15.272605   10084 system_pods.go:89] "kube-scheduler-ha-301400" [08507d01-54f8-4cbe-84bb-aa4e06c16520] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-scheduler-ha-301400-m02" [3beb7194-4252-49ed-8476-f624db9af5fa] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-scheduler-ha-301400-m03" [58f0444b-c1f8-4a39-804c-5c05f79010a2] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-vip-ha-301400" [fe68ce21-9a06-4d4a-9dc1-d36e76a54ef1] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-vip-ha-301400-m02" [b23f424b-46c2-4f19-9175-d061bb1966e5] Running
	I0910 18:43:15.272689   10084 system_pods.go:89] "kube-vip-ha-301400-m03" [7f449d83-55ba-4467-89d0-abb1d20b4707] Running
	I0910 18:43:15.272760   10084 system_pods.go:89] "storage-provisioner" [9d7be235-eee4-4f13-bc1c-7777806e77f2] Running
	I0910 18:43:15.272760   10084 system_pods.go:126] duration metric: took 217.477ms to wait for k8s-apps to be running ...
	I0910 18:43:15.272760   10084 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 18:43:15.280537   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:43:15.305384   10084 system_svc.go:56] duration metric: took 32.6218ms WaitForService to wait for kubelet
	I0910 18:43:15.305384   10084 kubeadm.go:582] duration metric: took 27.3117004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:43:15.306372   10084 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:43:15.455159   10084 request.go:632] Waited for 148.7523ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.216.168:8443/api/v1/nodes
	I0910 18:43:15.455331   10084 round_trippers.go:463] GET https://172.31.216.168:8443/api/v1/nodes
	I0910 18:43:15.455331   10084 round_trippers.go:469] Request Headers:
	I0910 18:43:15.455331   10084 round_trippers.go:473]     Accept: application/json, */*
	I0910 18:43:15.455331   10084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 18:43:15.459970   10084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 18:43:15.461750   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:43:15.461750   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:43:15.461822   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:43:15.461822   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:43:15.461822   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:43:15.461888   10084 node_conditions.go:123] node cpu capacity is 2
	I0910 18:43:15.461888   10084 node_conditions.go:105] duration metric: took 155.5052ms to run NodePressure ...
	I0910 18:43:15.461888   10084 start.go:241] waiting for startup goroutines ...
	I0910 18:43:15.461979   10084 start.go:255] writing updated cluster config ...
	I0910 18:43:15.472978   10084 ssh_runner.go:195] Run: rm -f paused
	I0910 18:43:15.596394   10084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 18:43:15.600316   10084 out.go:177] * Done! kubectl is now configured to use "ha-301400" cluster and "default" namespace by default
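The startup trace above is sprinkled with `duration metric: took …` lines (165.4275ms for the pod list, 27.3117004s for the whole kubeadm wait, and so on). A minimal sketch of pulling those values out of raw log lines so the slow readiness phases stand out — `duration_seconds` is a hypothetical helper, not part of minikube:

```python
import re

# Matches the "duration metric: took <value><unit>" fragment minikube emits;
# only the ms/s units seen in this log are handled.
DURATION_RE = re.compile(r"duration metric: took ([\d.]+)(ms|s)\b")

def duration_seconds(line: str):
    """Return the duration reported on a log line in seconds, or None."""
    m = DURATION_RE.search(line)
    if not m:
        return None
    value, unit = float(m.group(1)), m.group(2)
    return value / 1000.0 if unit == "ms" else value

# Sample line copied (abbreviated) from the trace above.
sample = ("I0910 18:43:15.305384   10084 kubeadm.go:582] duration metric: "
          "took 27.3117004s to wait for: map[apiserver:true kubelet:true]")
print(duration_seconds(sample))  # 27.3117004
```

Feeding every line of the trace through this and sorting descending is a quick way to see which wait dominated the 27s total.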
	
	
	==> Docker <==
	Sep 10 18:36:14 ha-301400 dockerd[1441]: time="2024-09-10T18:36:14.960365290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:36:15 ha-301400 dockerd[1441]: time="2024-09-10T18:36:15.073016116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:36:15 ha-301400 dockerd[1441]: time="2024-09-10T18:36:15.073305739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:36:15 ha-301400 dockerd[1441]: time="2024-09-10T18:36:15.073350243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:36:15 ha-301400 dockerd[1441]: time="2024-09-10T18:36:15.073477453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.729840003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.730778566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.730812868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.731085886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.739896770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.740031078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.740119284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 dockerd[1441]: time="2024-09-10T18:43:49.740357600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:49 ha-301400 cri-dockerd[1332]: time="2024-09-10T18:43:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2d41fd59c1d9af29da14430b62618316df34c9c6f383cfa936a194032efb3d41/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 18:43:49 ha-301400 cri-dockerd[1332]: time="2024-09-10T18:43:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/47a3e0c452177ecf58bdad369c0621c07eb658c2bfde06683af2f9c11ab8bf4b/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 18:43:51 ha-301400 cri-dockerd[1332]: time="2024-09-10T18:43:51Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Sep 10 18:43:51 ha-301400 dockerd[1441]: time="2024-09-10T18:43:51.620315979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:43:51 ha-301400 dockerd[1441]: time="2024-09-10T18:43:51.620428886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:43:51 ha-301400 dockerd[1441]: time="2024-09-10T18:43:51.620447988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:51 ha-301400 dockerd[1441]: time="2024-09-10T18:43:51.620595597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:51 ha-301400 cri-dockerd[1332]: time="2024-09-10T18:43:51Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Image is up to date for gcr.io/k8s-minikube/busybox:1.28"
	Sep 10 18:43:52 ha-301400 dockerd[1441]: time="2024-09-10T18:43:52.107272350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:43:52 ha-301400 dockerd[1441]: time="2024-09-10T18:43:52.107400058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:43:52 ha-301400 dockerd[1441]: time="2024-09-10T18:43:52.107430460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:43:52 ha-301400 dockerd[1441]: time="2024-09-10T18:43:52.107539667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1ab47a8d691f1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   47a3e0c452177       busybox-7dff88458-wbkmw
	edc8f4528b757       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   2d41fd59c1d9a       busybox-7dff88458-d2tcx
	34acad6b875b8       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   6873bb9deffdb       coredns-6f6b679f8f-ntqxc
	bea32c778c54a       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   92aa8e8846ef1       coredns-6f6b679f8f-fsbwc
	a32b3328b2f73       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   b2896be8301c2       storage-provisioner
	2bf8ec4096587       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              28 minutes ago      Running             kindnet-cni               0                   818bfadbd7a45       kindnet-7zqv2
	bed0fdc399e60       ad83b2ca7b09e                                                                                         28 minutes ago      Running             kube-proxy                0                   2e8aa95bd74fb       kube-proxy-sh5jk
	6f1c626fb447e       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     28 minutes ago      Running             kube-vip                  0                   2c1eba63c15e0       kube-vip-ha-301400
	54f16f39e60d6       604f5db92eaa8                                                                                         28 minutes ago      Running             kube-apiserver            0                   78b1face6c00d       kube-apiserver-ha-301400
	0a05e60cd24cd       1766f54c897f0                                                                                         28 minutes ago      Running             kube-scheduler            0                   6e9a59232ceae       kube-scheduler-ha-301400
	43a1ed13d84ac       2e96e5913fc06                                                                                         28 minutes ago      Running             etcd                      0                   a4bc6603ad4bd       etcd-ha-301400
	8285765ba9cc8       045733566833c                                                                                         28 minutes ago      Running             kube-controller-manager   0                   29c399af97983       kube-controller-manager-ha-301400
	
	
	==> coredns [34acad6b875b] <==
	[INFO] 10.244.0.5:54343 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001863717s
	[INFO] 10.244.0.4:38286 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000365923s
	[INFO] 10.244.0.4:52380 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.034212755s
	[INFO] 10.244.0.4:48204 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000518133s
	[INFO] 10.244.0.4:35151 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000270117s
	[INFO] 10.244.1.2:50300 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015811596s
	[INFO] 10.244.1.2:39428 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000286118s
	[INFO] 10.244.1.2:48752 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084906s
	[INFO] 10.244.1.2:50945 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000053603s
	[INFO] 10.244.1.2:32923 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060704s
	[INFO] 10.244.0.5:59533 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000396725s
	[INFO] 10.244.0.5:52236 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130708s
	[INFO] 10.244.0.5:51401 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015141s
	[INFO] 10.244.0.4:33144 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108007s
	[INFO] 10.244.0.4:44342 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134408s
	[INFO] 10.244.0.4:51342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000264117s
	[INFO] 10.244.1.2:60885 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093806s
	[INFO] 10.244.0.5:54868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188812s
	[INFO] 10.244.0.5:60478 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212314s
	[INFO] 10.244.0.4:54087 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097807s
	[INFO] 10.244.0.4:59433 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.001138172s
	[INFO] 10.244.0.4:35612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114207s
	[INFO] 10.244.1.2:37333 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000398525s
	[INFO] 10.244.0.5:40458 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000498931s
	[INFO] 10.244.0.5:52873 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213813s
	
	
	==> coredns [bea32c778c54] <==
	[INFO] 10.244.0.5:33067 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000308319s
	[INFO] 10.244.0.4:35470 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111907s
	[INFO] 10.244.0.4:52824 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.034732888s
	[INFO] 10.244.0.4:40307 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130108s
	[INFO] 10.244.0.4:46203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117908s
	[INFO] 10.244.1.2:48152 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108007s
	[INFO] 10.244.1.2:46265 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138809s
	[INFO] 10.244.1.2:60351 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114307s
	[INFO] 10.244.0.5:57021 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000273317s
	[INFO] 10.244.0.5:33161 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014911s
	[INFO] 10.244.0.5:51861 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.002989688s
	[INFO] 10.244.0.5:47423 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000219714s
	[INFO] 10.244.0.5:33162 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000260816s
	[INFO] 10.244.0.4:51899 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093806s
	[INFO] 10.244.1.2:45956 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209913s
	[INFO] 10.244.1.2:40098 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133408s
	[INFO] 10.244.1.2:51724 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081805s
	[INFO] 10.244.0.5:53210 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000219614s
	[INFO] 10.244.0.5:46687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015281s
	[INFO] 10.244.0.4:37811 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000207813s
	[INFO] 10.244.1.2:51061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000625439s
	[INFO] 10.244.1.2:44257 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199213s
	[INFO] 10.244.1.2:34769 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157909s
	[INFO] 10.244.0.5:60343 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000288018s
	[INFO] 10.244.0.5:45411 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015851s
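The CoreDNS query lines in the two sections above all share one layout: client address, query id, the quoted question (type, name, proto, size), then rcode, flags, response size, and latency. A small sketch of splitting one of these lines into fields — this is an illustrative parser written here, not CoreDNS code:

```python
import re

# Field layout observed in the CoreDNS "log" plugin output above:
# [INFO] <client>:<port> - <id> "<TYPE> IN <name> udp <size> <do> <bufsize>"
#        <RCODE> <flags> <rsize> <latency>s
LOG_RE = re.compile(
    r'\[INFO\] (?P<client>[\d.]+):\d+ - \d+ '
    r'"(?P<qtype>\S+) IN (?P<name>\S+) udp \d+ \w+ \d+" '
    r'(?P<rcode>\S+) \S+ \d+ (?P<latency>[\d.]+)s'
)

def parse_query(line: str):
    """Return the named fields of a CoreDNS query log line, or None."""
    m = LOG_RE.search(line)
    return m.groupdict() if m else None

# Line copied verbatim from the [34acad6b875b] section above.
line = ('[INFO] 10.244.0.5:54343 - 4 "A IN kubernetes.io. udp 31 false 512" '
        'NOERROR qr,rd 60 0.001863717s')
d = parse_query(line)
print(d["qtype"], d["name"], d["rcode"])  # A kubernetes.io. NOERROR
```

Grouping parsed lines by `rcode` quickly separates the expected NXDOMAIN search-path misses (e.g. `kubernetes.default.default.svc.cluster.local.`) from real failures.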
	
	
	==> describe nodes <==
	Name:               ha-301400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-301400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-301400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_35_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:35:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-301400
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:04:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:59:35 +0000   Tue, 10 Sep 2024 18:35:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:59:35 +0000   Tue, 10 Sep 2024 18:35:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:59:35 +0000   Tue, 10 Sep 2024 18:35:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:59:35 +0000   Tue, 10 Sep 2024 18:36:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.216.168
	  Hostname:    ha-301400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 9674892abe054124a29fe78dad3b3ea8
	  System UUID:                a46a0773-6043-5943-b07f-4fc55231d20c
	  Boot ID:                    d1445ab4-da2a-4fe2-a7f4-acb5e9b26c6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d2tcx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  default                     busybox-7dff88458-wbkmw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-6f6b679f8f-fsbwc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-6f6b679f8f-ntqxc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-301400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-7zqv2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-301400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-301400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-sh5jk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-301400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-301400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-301400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-301400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-301400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28m   node-controller  Node ha-301400 event: Registered Node ha-301400 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-301400 status is now: NodeReady
	  Normal  RegisteredNode           24m   node-controller  Node ha-301400 event: Registered Node ha-301400 in Controller
	  Normal  RegisteredNode           21m   node-controller  Node ha-301400 event: Registered Node ha-301400 in Controller
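The percentages in the "Allocated resources" table above follow directly from the node's Capacity section: 950m CPU requested against 2 CPUs, and 290Mi of memory against 2164264Ki. A quick sanity check of that arithmetic (assuming, as the figures here suggest, that `kubectl describe node` truncates rather than rounds):

```python
def pct(used: float, capacity: float) -> int:
    """Integer percentage, truncated — this reproduces the figures shown
    in the table above (47.5% -> 47, 13.7% -> 13)."""
    return int(used * 100 // capacity)

cpu_capacity_m = 2 * 1000      # 2 CPUs expressed in millicores
mem_capacity_ki = 2164264      # memory capacity from the node's Capacity section

print(pct(950, cpu_capacity_m))          # 47  (CPU requests)
print(pct(290 * 1024, mem_capacity_ki))  # 13  (memory requests, 290Mi)
print(pct(390 * 1024, mem_capacity_ki))  # 18  (memory limits, 390Mi)
```

The same arithmetic on the m02 table (750m / 150Mi) reproduces its 37% / 7% figures.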
	
	
	Name:               ha-301400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-301400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-301400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T18_39_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:39:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-301400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:04:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:02:05 +0000   Tue, 10 Sep 2024 19:02:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:02:05 +0000   Tue, 10 Sep 2024 19:02:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:02:05 +0000   Tue, 10 Sep 2024 19:02:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:02:05 +0000   Tue, 10 Sep 2024 19:02:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.210.72
	  Hostname:    ha-301400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6dc4f9802b7d4645921b8eea4a2bb13f
	  System UUID:                994f6dcd-0ea0-c641-b486-db3369e52782
	  Boot ID:                    80c036e6-04ee-4945-99a7-920db099b844
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lnwzg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-301400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-jv4nt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-ha-301400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-301400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-hqkvv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-ha-301400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-301400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 25m                  kube-proxy       
	  Normal   Starting                 119s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  25m (x8 over 25m)    kubelet          Node ha-301400-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25m (x8 over 25m)    kubelet          Node ha-301400-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     25m (x7 over 25m)    kubelet          Node ha-301400-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  25m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           25m                  node-controller  Node ha-301400-m02 event: Registered Node ha-301400-m02 in Controller
	  Normal   RegisteredNode           24m                  node-controller  Node ha-301400-m02 event: Registered Node ha-301400-m02 in Controller
	  Normal   RegisteredNode           21m                  node-controller  Node ha-301400-m02 event: Registered Node ha-301400-m02 in Controller
	  Normal   NodeNotReady             4m38s                node-controller  Node ha-301400-m02 status is now: NodeNotReady
	  Normal   Starting                 2m4s                 kubelet          Starting kubelet.
	  Warning  Rebooted                 2m4s                 kubelet          Node ha-301400-m02 has been rebooted, boot id: 80c036e6-04ee-4945-99a7-920db099b844
	  Normal   NodeHasSufficientMemory  2m4s (x2 over 2m4s)  kubelet          Node ha-301400-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m4s (x2 over 2m4s)  kubelet          Node ha-301400-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m4s (x2 over 2m4s)  kubelet          Node ha-301400-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m4s                 kubelet          Node ha-301400-m02 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	
	
	Name:               ha-301400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-301400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-301400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T18_42_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:42:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-301400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:04:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:03:37 +0000   Tue, 10 Sep 2024 18:42:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:03:37 +0000   Tue, 10 Sep 2024 18:42:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:03:37 +0000   Tue, 10 Sep 2024 18:42:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:03:37 +0000   Tue, 10 Sep 2024 18:43:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.217.146
	  Hostname:    ha-301400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 61aa00a070b7471daf3aa526379a1306
	  System UUID:                487dcf97-6c67-3d48-9807-df89639a7980
	  Boot ID:                    8b928b11-5293-478d-ad30-080a773cea30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-301400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-c72m2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-301400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-301400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-jczrq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-301400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-301400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node ha-301400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node ha-301400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node ha-301400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node ha-301400-m03 event: Registered Node ha-301400-m03 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-301400-m03 event: Registered Node ha-301400-m03 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-301400-m03 event: Registered Node ha-301400-m03 in Controller
	
	
	Name:               ha-301400-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-301400-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-301400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T18_47_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:47:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-301400-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:04:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:03:29 +0000   Tue, 10 Sep 2024 18:47:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:03:29 +0000   Tue, 10 Sep 2024 18:47:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:03:29 +0000   Tue, 10 Sep 2024 18:47:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:03:29 +0000   Tue, 10 Sep 2024 18:48:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.215.214
	  Hostname:    ha-301400-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9b5c4e4edd3424a92c692f0fbe76d1c
	  System UUID:                ab2bcd67-cf13-a74c-b203-d7690598cbe4
	  Boot ID:                    3b10cef0-e66b-4119-b392-8f30ff53fe8b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-65xw2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-proxy-jprh6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-301400-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-301400-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-301400-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-301400-m04 event: Registered Node ha-301400-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-301400-m04 event: Registered Node ha-301400-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-301400-m04 event: Registered Node ha-301400-m04 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-301400-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep10 18:34] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +41.794665] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.153729] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[Sep10 18:35] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +0.109668] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.492128] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +0.187301] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.214772] systemd-fstab-generator[1072]: Ignoring "noauto" option for root device
	[  +2.824271] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.190445] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.184721] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.260646] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[ +10.717407] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +0.104937] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.465855] systemd-fstab-generator[1683]: Ignoring "noauto" option for root device
	[  +5.308151] systemd-fstab-generator[1826]: Ignoring "noauto" option for root device
	[  +0.087881] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.222127] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.805126] systemd-fstab-generator[2318]: Ignoring "noauto" option for root device
	[  +6.508337] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.377050] kauditd_printk_skb: 29 callbacks suppressed
	[Sep10 18:39] kauditd_printk_skb: 24 callbacks suppressed
	[Sep10 18:58] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [43a1ed13d84a] <==
	{"level":"warn","ts":"2024-09-10T19:04:09.466071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.475185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.480210Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.485039Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.493695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.507213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.523284Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.528668Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.538160Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.554045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.554599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.570079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.576093Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.583952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.589560Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.594021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.598464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.607487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.616572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.653024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.657052Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://172.31.215.2:2380/version","remote-member-id":"750951b929933dd8","error":"Get \"https://172.31.215.2:2380/version\": dial tcp 172.31.215.2:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-10T19:04:09.657105Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"750951b929933dd8","error":"Get \"https://172.31.215.2:2380/version\": dial tcp 172.31.215.2:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-10T19:04:09.681170Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.687011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T19:04:09.687146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9f8d1d2f4692bc29","from":"9f8d1d2f4692bc29","remote-peer-id":"750951b929933dd8","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:04:09 up 30 min,  0 users,  load average: 0.06, 0.20, 0.26
	Linux ha-301400 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2bf8ec409658] <==
	I0910 19:03:31.016047       1 main.go:322] Node ha-301400-m04 has CIDR [10.244.3.0/24] 
	I0910 19:03:41.014748       1 main.go:295] Handling node with IPs: map[172.31.216.168:{}]
	I0910 19:03:41.014866       1 main.go:299] handling current node
	I0910 19:03:41.014885       1 main.go:295] Handling node with IPs: map[172.31.210.72:{}]
	I0910 19:03:41.014891       1 main.go:322] Node ha-301400-m02 has CIDR [10.244.1.0/24] 
	I0910 19:03:41.015363       1 main.go:295] Handling node with IPs: map[172.31.217.146:{}]
	I0910 19:03:41.015486       1 main.go:322] Node ha-301400-m03 has CIDR [10.244.2.0/24] 
	I0910 19:03:41.016085       1 main.go:295] Handling node with IPs: map[172.31.215.214:{}]
	I0910 19:03:41.016178       1 main.go:322] Node ha-301400-m04 has CIDR [10.244.3.0/24] 
	I0910 19:03:51.017714       1 main.go:295] Handling node with IPs: map[172.31.216.168:{}]
	I0910 19:03:51.018066       1 main.go:299] handling current node
	I0910 19:03:51.018089       1 main.go:295] Handling node with IPs: map[172.31.210.72:{}]
	I0910 19:03:51.018099       1 main.go:322] Node ha-301400-m02 has CIDR [10.244.1.0/24] 
	I0910 19:03:51.018213       1 main.go:295] Handling node with IPs: map[172.31.217.146:{}]
	I0910 19:03:51.018220       1 main.go:322] Node ha-301400-m03 has CIDR [10.244.2.0/24] 
	I0910 19:03:51.018615       1 main.go:295] Handling node with IPs: map[172.31.215.214:{}]
	I0910 19:03:51.018696       1 main.go:322] Node ha-301400-m04 has CIDR [10.244.3.0/24] 
	I0910 19:04:01.014489       1 main.go:295] Handling node with IPs: map[172.31.216.168:{}]
	I0910 19:04:01.014524       1 main.go:299] handling current node
	I0910 19:04:01.014541       1 main.go:295] Handling node with IPs: map[172.31.210.72:{}]
	I0910 19:04:01.014548       1 main.go:322] Node ha-301400-m02 has CIDR [10.244.1.0/24] 
	I0910 19:04:01.014718       1 main.go:295] Handling node with IPs: map[172.31.217.146:{}]
	I0910 19:04:01.014823       1 main.go:322] Node ha-301400-m03 has CIDR [10.244.2.0/24] 
	I0910 19:04:01.014902       1 main.go:295] Handling node with IPs: map[172.31.215.214:{}]
	I0910 19:04:01.014980       1 main.go:322] Node ha-301400-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [54f16f39e60d] <==
	I0910 18:35:45.598882       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 18:35:46.255647       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:35:46.407648       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:35:46.451775       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0910 18:35:46.473079       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:35:51.753788       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0910 18:35:52.011539       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0910 18:42:42.329092       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 78.206µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0910 18:42:42.331452       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="PATCH" URI="/api/v1/namespaces/default/events/ha-301400-m03.17f3f62e68851b41" auditID="67956b62-0a18-441e-ae27-9aaff4086808"
	E0910 18:42:42.333093       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="7.801µs" method="PATCH" path="/api/v1/namespaces/default/events/ha-301400-m03.17f3f62e68851b41" result=null
	E0910 18:43:55.757843       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64492: use of closed network connection
	E0910 18:43:57.270356       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64494: use of closed network connection
	E0910 18:43:57.700306       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64496: use of closed network connection
	E0910 18:43:58.200897       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64498: use of closed network connection
	E0910 18:43:58.634175       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64500: use of closed network connection
	E0910 18:43:59.113707       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64502: use of closed network connection
	E0910 18:43:59.564522       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64504: use of closed network connection
	E0910 18:43:59.981102       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64507: use of closed network connection
	E0910 18:44:00.425883       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64509: use of closed network connection
	E0910 18:44:01.204715       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64512: use of closed network connection
	E0910 18:44:11.641413       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64514: use of closed network connection
	E0910 18:44:12.057864       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64517: use of closed network connection
	E0910 18:44:22.479215       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64519: use of closed network connection
	E0910 18:44:22.913638       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64522: use of closed network connection
	E0910 18:44:33.334980       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:64524: use of closed network connection
	
	
	==> kube-controller-manager [8285765ba9cc] <==
	I0910 18:49:15.965970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 18:49:23.560363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400"
	I0910 18:53:18.258662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m04"
	I0910 18:53:25.357934       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	I0910 18:54:20.688011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 18:54:28.961285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400"
	I0910 18:58:23.831345       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m04"
	I0910 18:58:32.186110       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	I0910 18:59:31.493300       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-301400-m04"
	I0910 18:59:31.494703       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 18:59:31.534948       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 18:59:31.729143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.24966ms"
	I0910 18:59:31.729289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.804µs"
	I0910 18:59:33.252189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 18:59:35.205646       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400"
	I0910 18:59:36.749083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 19:02:05.729869       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-301400-m04"
	I0910 19:02:05.730461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 19:02:05.758110       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 19:02:06.785378       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m02"
	I0910 19:02:06.841292       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.205µs"
	I0910 19:02:09.603756       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.63113ms"
	I0910 19:02:09.604637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.103µs"
	I0910 19:03:29.377793       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m04"
	I0910 19:03:37.498806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-301400-m03"
	
	
	==> kube-proxy [bed0fdc399e6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:35:53.062905       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:35:53.077687       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.31.216.168"]
	E0910 18:35:53.077755       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:35:53.148721       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:35:53.148853       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:35:53.148893       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:35:53.153671       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:35:53.154563       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:35:53.154592       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:35:53.156134       1 config.go:197] "Starting service config controller"
	I0910 18:35:53.156464       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:35:53.156501       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:35:53.156588       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:35:53.157420       1 config.go:326] "Starting node config controller"
	I0910 18:35:53.157448       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:35:53.257160       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:35:53.257177       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:35:53.257588       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0a05e60cd24c] <==
	E0910 18:35:44.524693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.556428       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 18:35:44.556503       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 18:35:44.639868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 18:35:44.639917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.690820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 18:35:44.690871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.705642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 18:35:44.705961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.705917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 18:35:44.706143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:35:44.835403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 18:35:44.835506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 18:35:47.540732       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 18:42:41.697577       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-c72m2\": pod kindnet-c72m2 is already assigned to node \"ha-301400-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-c72m2" node="ha-301400-m03"
	E0910 18:42:41.697924       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c72m2\": pod kindnet-c72m2 is already assigned to node \"ha-301400-m03\"" pod="kube-system/kindnet-c72m2"
	I0910 18:42:41.699423       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c72m2" node="ha-301400-m03"
	E0910 18:42:41.698499       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sdw96\": pod kube-proxy-sdw96 is already assigned to node \"ha-301400-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sdw96" node="ha-301400-m03"
	E0910 18:42:41.702274       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 78996f3e-ec5e-4e51-a7af-e3297da1afbd(kube-system/kube-proxy-sdw96) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sdw96"
	E0910 18:42:41.704301       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sdw96\": pod kube-proxy-sdw96 is already assigned to node \"ha-301400-m03\"" pod="kube-system/kube-proxy-sdw96"
	I0910 18:42:41.704407       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sdw96" node="ha-301400-m03"
	E0910 18:47:42.395829       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sxs6r\": pod kindnet-sxs6r is already assigned to node \"ha-301400-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sxs6r" node="ha-301400-m04"
	E0910 18:47:42.396322       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7130ac5f-3dd6-4db0-a598-f28008f3e1e7(kube-system/kindnet-sxs6r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sxs6r"
	E0910 18:47:42.396564       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sxs6r\": pod kindnet-sxs6r is already assigned to node \"ha-301400-m04\"" pod="kube-system/kindnet-sxs6r"
	I0910 18:47:42.396775       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sxs6r" node="ha-301400-m04"
	
	
	==> kubelet <==
	Sep 10 18:59:46 ha-301400 kubelet[2325]: E0910 18:59:46.509304    2325 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:59:46 ha-301400 kubelet[2325]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:59:46 ha-301400 kubelet[2325]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:59:46 ha-301400 kubelet[2325]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:59:46 ha-301400 kubelet[2325]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:00:46 ha-301400 kubelet[2325]: E0910 19:00:46.502846    2325 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:00:46 ha-301400 kubelet[2325]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:00:46 ha-301400 kubelet[2325]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:00:46 ha-301400 kubelet[2325]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:00:46 ha-301400 kubelet[2325]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:01:46 ha-301400 kubelet[2325]: E0910 19:01:46.502800    2325 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:01:46 ha-301400 kubelet[2325]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:01:46 ha-301400 kubelet[2325]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:01:46 ha-301400 kubelet[2325]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:01:46 ha-301400 kubelet[2325]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:02:46 ha-301400 kubelet[2325]: E0910 19:02:46.502890    2325 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:02:46 ha-301400 kubelet[2325]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:02:46 ha-301400 kubelet[2325]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:02:46 ha-301400 kubelet[2325]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:02:46 ha-301400 kubelet[2325]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:03:46 ha-301400 kubelet[2325]: E0910 19:03:46.502949    2325 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:03:46 ha-301400 kubelet[2325]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:03:46 ha-301400 kubelet[2325]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:03:46 ha-301400 kubelet[2325]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:03:46 ha-301400 kubelet[2325]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
E0910 19:04:10.652892    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-301400 -n ha-301400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-301400 -n ha-301400: (10.8375205s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-301400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (256.70s)

                                                
                                    
TestJSONOutput/start/Command (193.08s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-867500 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0910 19:12:59.787443    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:14:10.700502    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-867500 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: exit status 90 (3m13.0752022s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"274f8200-1f5f-4aed-b9d9-ec444a0245d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-867500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf6d8139-5b82-44ef-80e1-37a87b7bd57f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"9552b5c5-87f0-4b11-b1bb-1f3b72d9e6c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2a395a8d-4149-4966-84cd-a7c2b3875cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"57e71073-8047-4b05-9d9e-8ab07de1a754","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"c6b54686-91da-4458-b0f6-d639f9acc0f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"72d21f79-e68a-4b13-b0e4-9b5351feec0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the hyperv driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b0c42f7-99e2-4435-85b6-20b4daf9df0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-867500\" primary control-plane node in \"json-output-867500\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c93c87ea-7b31-4cc3-8b09-604e89090aa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9566e59-167d-4144-b652-06bd9b621db5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Failing to connect to https://registry.k8s.io/ from inside the minikube VM"}}
	{"specversion":"1.0","id":"3dca8e7a-63f8-4615-8e5d-8b65984cc247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"}}
	{"specversion":"1.0","id":"49888ec3-c76f-4a86-aa1e-c34ab5ac2b28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"90","issues":"","message":"Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1\nstdout:\n\nstderr:\nJob for docker.service failed because the control process exited with error code.\nSee \"systemctl status docker.service\" and \"journalctl -xeu docker.service\" for details.\n\nsudo journalctl --no-pager -u docker:\n-- stdout --\nSep 10 19:12:49 json-output-867500 systemd[1]: Starting Docker Application Container Engine...\nSep 10 19:12:49 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:49.967240837Z\" level=info msg=\"Starting up\"\nSep 10 19:12:49 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:49.968075162Z\" level=info msg=\"containerd not running, starting managed containerd\"\nSep 10 19:12:49 json-output-867500 dockerd[656]: ti
me=\"2024-09-10T19:12:49.968891379Z\" level=info msg=\"started new containerd process\" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664\nSep 10 19:12:49 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:49.996894474Z\" level=info msg=\"starting containerd\" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.025446685Z\" level=info msg=\"loading plugin \\\"io.containerd.event.v1.exchange\\\"...\" type=io.containerd.event.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.025510409Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.opt\\\"...\" type=io.containerd.internal.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.025587337Z\" level=info msg=\"loading plugin \\\"io.containerd.warning.v1.deprecations\\\"...\" type=io.containerd.warning.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10
T19:12:50.025680272Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.025764303Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" error=\"no scratch file generator: skip plugin\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.025873643Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.026190560Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 doc
kerd[664]: time=\"2024-09-10T19:12:50.026293298Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.devmapper\\\"...\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.026316706Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.devmapper\\\"...\" error=\"devmapper not configured: skip plugin\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.026329211Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.native\\\"...\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.026425646Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.overlayfs\\\"...\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.026746865Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" type=io.cont
ainerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.028996794Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" error=\"aufs is not supported (modprobe aufs failed: exit status 1 \\\"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\\\n\\\"): skip plugin\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.029078124Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.029197468Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"20
24-09-10T19:12:50.029270995Z\" level=info msg=\"loading plugin \\\"io.containerd.content.v1.content\\\"...\" type=io.containerd.content.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.029351425Z\" level=info msg=\"loading plugin \\\"io.containerd.metadata.v1.bolt\\\"...\" type=io.containerd.metadata.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.029458764Z\" level=info msg=\"metadata content store policy set\" policy=shared\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.055378516Z\" level=info msg=\"loading plugin \\\"io.containerd.gc.v1.scheduler\\\"...\" type=io.containerd.gc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.055507564Z\" level=info msg=\"loading plugin \\\"io.containerd.differ.v1.walking\\\"...\" type=io.containerd.differ.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.055645115Z\" level=info msg=\"loading plugin \\\"io.containerd.lease
.v1.manager\\\"...\" type=io.containerd.lease.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.055826482Z\" level=info msg=\"loading plugin \\\"io.containerd.streaming.v1.manager\\\"...\" type=io.containerd.streaming.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.055884703Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v1.linux\\\"...\" type=io.containerd.runtime.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.056121290Z\" level=info msg=\"loading plugin \\\"io.containerd.monitor.v1.cgroups\\\"...\" type=io.containerd.monitor.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.056789136Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.task\\\"...\" type=io.containerd.runtime.v2\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057091748Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.shim\\\"...\" type=io.containerd.
runtime.v2\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057148569Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.store.v1.local\\\"...\" type=io.containerd.sandbox.store.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057178880Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.controller.v1.local\\\"...\" type=io.containerd.sandbox.controller.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057201588Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.containers-service\\\"...\" type=io.containerd.service.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057224197Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.content-service\\\"...\" type=io.containerd.service.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057263911Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.diff-service\\\"...\"
type=io.containerd.service.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057290621Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.images-service\\\"...\" type=io.containerd.service.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057315030Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.introspection-service\\\"...\" type=io.containerd.service.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057358046Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.namespaces-service\\\"...\" type=io.containerd.service.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057379954Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.snapshots-service\\\"...\" type=io.containerd.service.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057404163Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.tasks-se
rvice\\\"...\" type=io.containerd.service.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057439076Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.containers\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057491995Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.content\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057671862Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.diff\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057701072Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.events\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057749290Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.images\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json
-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057887441Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.introspection\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.057920053Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.leases\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058014188Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.namespaces\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058058004Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandbox-controllers\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058084214Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandboxes\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-0
9-10T19:12:50.058232268Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.snapshots\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058271783Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.streaming\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058414035Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.tasks\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058445747Z\" level=info msg=\"loading plugin \\\"io.containerd.transfer.v1.local\\\"...\" type=io.containerd.transfer.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058482460Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.transfer\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058524976Z\" level=info msg=\"loading plu
gin \\\"io.containerd.grpc.v1.version\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058566591Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.restart\\\"...\" type=io.containerd.internal.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058657425Z\" level=info msg=\"loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" type=io.containerd.tracing.processor.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058686235Z\" level=info msg=\"skip loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" error=\"skip plugin: tracing endpoint not configured\" type=io.containerd.tracing.processor.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058705743Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.tracing\\\"...\" type=io.containerd.internal.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"202
4-09-10T19:12:50.058729551Z\" level=info msg=\"skip loading plugin \\\"io.containerd.internal.v1.tracing\\\"...\" error=\"skip plugin: tracing endpoint not configured\" type=io.containerd.internal.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058766965Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.healthcheck\\\"...\" type=io.containerd.grpc.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058789574Z\" level=info msg=\"loading plugin \\\"io.containerd.nri.v1.nri\\\"...\" type=io.containerd.nri.v1\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.058821085Z\" level=info msg=\"NRI interface is disabled by configuration.\"\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.059367086Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.059470925Z\" level=info msg=serving...
address=/var/run/docker/containerd/containerd.sock.ttrpc\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.059556356Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock\nSep 10 19:12:50 json-output-867500 dockerd[664]: time=\"2024-09-10T19:12:50.059604174Z\" level=info msg=\"containerd successfully booted in 0.064031s\"\nSep 10 19:12:51 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:51.043541872Z\" level=info msg=\"[graphdriver] trying configured driver: overlay2\"\nSep 10 19:12:51 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:51.079794035Z\" level=info msg=\"Loading containers: start.\"\nSep 10 19:12:51 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:51.233827837Z\" level=warning msg=\"ip6tables is enabled, but cannot set up ip6tables chains\" error=\"failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (
do you need to insmod?)\\nPerhaps ip6tables or your kernel needs to be upgraded.\\n (exit status 3)\"\nSep 10 19:12:51 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:51.431557670Z\" level=info msg=\"Loading containers: done.\"\nSep 10 19:12:51 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:51.450680089Z\" level=info msg=\"Docker daemon\" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0\nSep 10 19:12:51 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:51.450878258Z\" level=info msg=\"Daemon has completed initialization\"\nSep 10 19:12:51 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:51.558366126Z\" level=info msg=\"API listen on /var/run/docker.sock\"\nSep 10 19:12:51 json-output-867500 dockerd[656]: time=\"2024-09-10T19:12:51.558458358Z\" level=info msg=\"API listen on [::]:2376\"\nSep 10 19:12:51 json-output-867500 systemd[1]: Started Docker Application Container Engine.\nSep 10 19:13:17 json-output-867500 dockerd[656]: time=\"2024-09-
10T19:13:17.692652739Z\" level=info msg=\"Processing signal 'terminated'\"\nSep 10 19:13:17 json-output-867500 systemd[1]: Stopping Docker Application Container Engine...\nSep 10 19:13:17 json-output-867500 dockerd[656]: time=\"2024-09-10T19:13:17.694290042Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"\u003cnil\u003e\" module=libcontainerd namespace=moby\nSep 10 19:13:17 json-output-867500 dockerd[656]: time=\"2024-09-10T19:13:17.694405049Z\" level=info msg=\"Daemon shutdown complete\"\nSep 10 19:13:17 json-output-867500 dockerd[656]: time=\"2024-09-10T19:13:17.694472353Z\" level=info msg=\"stopping healthcheck following graceful shutdown\" module=libcontainerd\nSep 10 19:13:17 json-output-867500 dockerd[656]: time=\"2024-09-10T19:13:17.694576860Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"context canceled\" module=libcontainerd namespace=plugins.moby\nSep 10 19:13:18 json-output-867500 systemd[1]: docker.service: Deactivated successfully.
\nSep 10 19:13:18 json-output-867500 systemd[1]: Stopped Docker Application Container Engine.\nSep 10 19:13:18 json-output-867500 systemd[1]: Starting Docker Application Container Engine...\nSep 10 19:13:18 json-output-867500 dockerd[1064]: time=\"2024-09-10T19:13:18.745181119Z\" level=info msg=\"Starting up\"\nSep 10 19:14:18 json-output-867500 dockerd[1064]: failed to start daemon: failed to dial \"/run/containerd/containerd.sock\": failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded\nSep 10 19:14:18 json-output-867500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE\nSep 10 19:14:18 json-output-867500 systemd[1]: docker.service: Failed with result 'exit-code'.\nSep 10 19:14:18 json-output-867500 systemd[1]: Failed to start Docker Application Container Engine.\n\n-- /stdout --","name":"RUNTIME_ENABLE","url":""}}
	{"specversion":"1.0","id":"8baaaa62-7db7-45a3-9bba-147b933f4c62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe start -p json-output-867500 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv": exit status 90
--- FAIL: TestJSONOutput/start/Command (193.08s)
TestJSONOutput/pause/Command (7.53s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-867500 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p json-output-867500 --output=json --user=testUser: exit status 80 (7.5237914s)
-- stdout --
	{"specversion":"1.0","id":"158f5f93-dc2e-4312-96b2-0209ce0ba234","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-867500 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"52cd48f0-6a15-4492-9fc0-c0b2449aacc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1\nstdout:\n\nstderr:\nFailed to disable unit: Unit file kubelet.service does not exist.","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"9db92a6c-c2bb-4bb3-abba-bd260863c108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│
│\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\minikube_pause_67bb464d067bc7ce14c8f0e8f875af68a444b926_0.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe pause -p json-output-867500 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (7.53s)
TestJSONOutput/unpause/Command (52.62s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-867500 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p json-output-867500 --output=json --user=testUser: exit status 80 (52.6089335s)
-- stdout --
	{"specversion":"1.0","id":"b8ee66d2-60c9-4d5a-a5b9-ddb240a06031","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-867500 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"66095261-f7eb-4e99-b7f4-ad36d8d03e52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=\u003cno value\u003e: Process exited with status 1\nstdout:\n\nstderr:\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"c4704383-b95e-492d-ba86-86be52a96616","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                       │\n│    If the above advice does not help, please let us know:                                                             │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                           │\n│
│\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │\n│    Please also attach the following file to the GitHub issue:                                                         │\n│    - C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\minikube_unpause_ec0e7c403dcebc8612cf397efd999a50731c57ec_0.log    │\n│                                                                                                                       │\n╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe unpause -p json-output-867500 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (52.62s)
TestMultiNode/serial/PingHostFrom2Pods (52.35s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-7c4qt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-7c4qt -- sh -c "ping -c 1 172.31.208.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-7c4qt -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.3982187s)
-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
multinode_test.go:584: Failed to ping host (172.31.208.1) from pod (busybox-7dff88458-7c4qt): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-lzs87 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-lzs87 -- sh -c "ping -c 1 172.31.208.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-lzs87 -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.4113209s)
-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
multinode_test.go:584: Failed to ping host (172.31.208.1) from pod (busybox-7dff88458-lzs87): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-629100 -n multinode-629100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-629100 -n multinode-629100: (10.6465223s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 logs -n 25: (7.5543264s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-038400 ssh -- ls                    | mount-start-2-038400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:29 UTC | 10 Sep 24 19:29 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-038400                           | mount-start-1-038400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:29 UTC | 10 Sep 24 19:29 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-038400 ssh -- ls                    | mount-start-2-038400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:29 UTC | 10 Sep 24 19:30 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-038400                           | mount-start-2-038400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:30 UTC | 10 Sep 24 19:30 UTC |
	| start   | -p mount-start-2-038400                           | mount-start-2-038400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:30 UTC | 10 Sep 24 19:32 UTC |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-038400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:32 UTC |                     |
	|         | --profile mount-start-2-038400 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-038400 ssh -- ls                    | mount-start-2-038400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:32 UTC | 10 Sep 24 19:32 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-038400                           | mount-start-2-038400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:32 UTC | 10 Sep 24 19:32 UTC |
	| delete  | -p mount-start-1-038400                           | mount-start-1-038400 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:32 UTC | 10 Sep 24 19:32 UTC |
	| start   | -p multinode-629100                               | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:32 UTC | 10 Sep 24 19:39 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- apply -f                   | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- rollout                    | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- get pods -o                | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- get pods -o                | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | busybox-7dff88458-7c4qt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | busybox-7dff88458-lzs87 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | busybox-7dff88458-7c4qt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | busybox-7dff88458-lzs87 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | busybox-7dff88458-7c4qt -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | busybox-7dff88458-lzs87 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- get pods -o                | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | busybox-7dff88458-7c4qt                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC |                     |
	|         | busybox-7dff88458-7c4qt -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC | 10 Sep 24 19:39 UTC |
	|         | busybox-7dff88458-lzs87                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-629100 -- exec                       | multinode-629100     | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:39 UTC |                     |
	|         | busybox-7dff88458-lzs87 -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 19:32:51
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 19:32:51.411679     716 out.go:345] Setting OutFile to fd 968 ...
	I0910 19:32:51.462692     716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:32:51.462692     716 out.go:358] Setting ErrFile to fd 1476...
	I0910 19:32:51.462692     716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:32:51.479462     716 out.go:352] Setting JSON to false
	I0910 19:32:51.481463     716 start.go:129] hostinfo: {"hostname":"minikube5","uptime":108034,"bootTime":1725888736,"procs":180,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 19:32:51.482464     716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 19:32:51.486345     716 out.go:177] * [multinode-629100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 19:32:51.491738     716 notify.go:220] Checking for updates...
	I0910 19:32:51.492084     716 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:32:51.493868     716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 19:32:51.496933     716 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 19:32:51.499771     716 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 19:32:51.502926     716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 19:32:51.506426     716 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:32:51.506426     716 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 19:32:56.205232     716 out.go:177] * Using the hyperv driver based on user configuration
	I0910 19:32:56.210421     716 start.go:297] selected driver: hyperv
	I0910 19:32:56.210537     716 start.go:901] validating driver "hyperv" against <nil>
	I0910 19:32:56.210617     716 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 19:32:56.260091     716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 19:32:56.261127     716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:32:56.261127     716 cni.go:84] Creating CNI manager for ""
	I0910 19:32:56.261127     716 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0910 19:32:56.261127     716 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 19:32:56.261646     716 start.go:340] cluster config:
	{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:32:56.262041     716 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 19:32:56.265726     716 out.go:177] * Starting "multinode-629100" primary control-plane node in "multinode-629100" cluster
	I0910 19:32:56.270533     716 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:32:56.270706     716 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 19:32:56.270706     716 cache.go:56] Caching tarball of preloaded images
	I0910 19:32:56.270706     716 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 19:32:56.270706     716 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 19:32:56.270706     716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:32:56.270706     716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json: {Name:mk844d6627842d368d89963794811dbb4aec3d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:32:56.271579     716 start.go:360] acquireMachinesLock for multinode-629100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:32:56.271579     716 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-629100"
	I0910 19:32:56.272584     716 start.go:93] Provisioning new machine with config: &{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 19:32:56.272584     716 start.go:125] createHost starting for "" (driver="hyperv")
	I0910 19:32:56.274584     716 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 19:32:56.274584     716 start.go:159] libmachine.API.Create for "multinode-629100" (driver="hyperv")
	I0910 19:32:56.275581     716 client.go:168] LocalClient.Create starting
	I0910 19:32:56.275581     716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0910 19:32:56.275581     716 main.go:141] libmachine: Decoding PEM data...
	I0910 19:32:56.275581     716 main.go:141] libmachine: Parsing certificate...
	I0910 19:32:56.275581     716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0910 19:32:56.275581     716 main.go:141] libmachine: Decoding PEM data...
	I0910 19:32:56.275581     716 main.go:141] libmachine: Parsing certificate...
	I0910 19:32:56.276582     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0910 19:32:58.138937     716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0910 19:32:58.139570     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:32:58.139607     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0910 19:32:59.646149     716 main.go:141] libmachine: [stdout =====>] : False
	
	I0910 19:32:59.646149     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:32:59.646703     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 19:33:01.056913     716 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 19:33:01.056913     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:01.057427     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 19:33:04.228356     716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 19:33:04.228356     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:04.231141     716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 19:33:04.592154     716 main.go:141] libmachine: Creating SSH key...
	I0910 19:33:04.882805     716 main.go:141] libmachine: Creating VM...
	I0910 19:33:04.882805     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 19:33:07.400076     716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 19:33:07.400076     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:07.400076     716 main.go:141] libmachine: Using switch "Default Switch"
	I0910 19:33:07.400076     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 19:33:08.989280     716 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 19:33:08.989280     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:08.989280     716 main.go:141] libmachine: Creating VHD
	I0910 19:33:08.989918     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0910 19:33:12.322999     716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CAAE934B-74A7-4101-8281-CDE6E9FCF162
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0910 19:33:12.324179     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:12.324304     716 main.go:141] libmachine: Writing magic tar header
	I0910 19:33:12.324304     716 main.go:141] libmachine: Writing SSH key tar header
	I0910 19:33:12.338654     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0910 19:33:15.267401     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:15.267401     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:15.267475     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\disk.vhd' -SizeBytes 20000MB
	I0910 19:33:17.556262     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:17.556262     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:17.556334     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-629100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0910 19:33:20.677071     716 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-629100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0910 19:33:20.677071     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:20.677071     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-629100 -DynamicMemoryEnabled $false
	I0910 19:33:22.668581     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:22.668581     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:22.669621     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-629100 -Count 2
	I0910 19:33:24.567575     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:24.567575     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:24.567947     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-629100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\boot2docker.iso'
	I0910 19:33:26.834878     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:26.834878     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:26.835537     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-629100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\disk.vhd'
	I0910 19:33:29.190151     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:29.190151     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:29.190151     716 main.go:141] libmachine: Starting VM...
	I0910 19:33:29.190404     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-629100
	I0910 19:33:31.947656     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:31.947656     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:31.947656     716 main.go:141] libmachine: Waiting for host to start...
	I0910 19:33:31.948162     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:33:33.931445     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:33:33.931445     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:33.931445     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:33:36.130511     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:36.130803     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:37.142900     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:33:39.146956     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:33:39.147029     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:39.147101     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:33:41.395577     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:41.396001     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:42.405109     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:33:44.403307     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:33:44.403307     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:44.403629     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:33:46.632395     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:46.632395     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:47.636339     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:33:49.637668     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:33:49.637668     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:49.637739     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:33:51.852146     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:33:51.852848     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:52.865934     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:33:54.835503     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:33:54.835503     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:54.836412     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:33:57.153333     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:33:57.153333     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:57.154374     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:33:59.075032     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:33:59.075032     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:33:59.075032     716 machine.go:93] provisionDockerMachine start ...
	I0910 19:33:59.075032     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:00.980196     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:00.980196     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:00.981125     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:03.323169     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:03.323169     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:03.326622     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:34:03.338351     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.71 22 <nil> <nil>}
	I0910 19:34:03.338351     716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:34:03.471025     716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:34:03.471025     716 buildroot.go:166] provisioning hostname "multinode-629100"
	I0910 19:34:03.471025     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:05.386771     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:05.386771     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:05.386906     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:07.703139     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:07.703139     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:07.706835     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:34:07.707432     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.71 22 <nil> <nil>}
	I0910 19:34:07.707432     716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629100 && echo "multinode-629100" | sudo tee /etc/hostname
	I0910 19:34:07.865192     716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629100
	
	I0910 19:34:07.865298     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:09.804437     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:09.804437     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:09.804618     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:12.099175     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:12.099175     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:12.103641     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:34:12.104243     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.71 22 <nil> <nil>}
	I0910 19:34:12.104243     716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:34:12.259294     716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:34:12.259373     716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 19:34:12.259373     716 buildroot.go:174] setting up certificates
	I0910 19:34:12.259373     716 provision.go:84] configureAuth start
	I0910 19:34:12.259487     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:14.184056     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:14.184805     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:14.184805     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:16.498381     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:16.498381     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:16.498594     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:18.402787     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:18.402787     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:18.402787     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:20.703439     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:20.703439     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:20.703439     716 provision.go:143] copyHostCerts
	I0910 19:34:20.703439     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 19:34:20.704245     716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 19:34:20.704245     716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 19:34:20.704668     716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 19:34:20.706190     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 19:34:20.706190     716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 19:34:20.706190     716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 19:34:20.706790     716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 19:34:20.708123     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 19:34:20.708283     716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 19:34:20.708283     716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 19:34:20.708820     716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 19:34:20.710350     716 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-629100 san=[127.0.0.1 172.31.210.71 localhost minikube multinode-629100]
	I0910 19:34:20.863594     716 provision.go:177] copyRemoteCerts
	I0910 19:34:20.873223     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:34:20.873223     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:22.791016     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:22.791016     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:22.791016     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:25.085217     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:25.085217     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:25.086378     716 sshutil.go:53] new ssh client: &{IP:172.31.210.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:34:25.194385     716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3208245s)
	I0910 19:34:25.194455     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 19:34:25.195024     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 19:34:25.236502     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 19:34:25.236855     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0910 19:34:25.282879     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 19:34:25.283429     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 19:34:25.326125     716 provision.go:87] duration metric: took 13.0658737s to configureAuth
	I0910 19:34:25.326125     716 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:34:25.327096     716 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:34:25.327140     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:27.241328     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:27.241328     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:27.241578     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:29.540710     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:29.541588     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:29.545584     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:34:29.546024     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.71 22 <nil> <nil>}
	I0910 19:34:29.546124     716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 19:34:29.675574     716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 19:34:29.675574     716 buildroot.go:70] root file system type: tmpfs
	I0910 19:34:29.675574     716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 19:34:29.676101     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:31.588318     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:31.589320     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:31.589380     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:33.867031     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:33.867031     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:33.871436     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:34:33.872167     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.71 22 <nil> <nil>}
	I0910 19:34:33.872167     716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 19:34:34.025179     716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 19:34:34.025298     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:35.912702     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:35.912828     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:35.912828     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:38.245981     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:38.246256     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:38.253484     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:34:38.253484     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.71 22 <nil> <nil>}
	I0910 19:34:38.253484     716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 19:34:40.372059     716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 19:34:40.372150     716 machine.go:96] duration metric: took 41.2943438s to provisionDockerMachine
	I0910 19:34:40.372150     716 client.go:171] duration metric: took 1m44.0895656s to LocalClient.Create
	I0910 19:34:40.372219     716 start.go:167] duration metric: took 1m44.0906313s to libmachine.API.Create "multinode-629100"
	I0910 19:34:40.372258     716 start.go:293] postStartSetup for "multinode-629100" (driver="hyperv")
	I0910 19:34:40.372371     716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:34:40.383430     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:34:40.383430     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:42.299030     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:42.299379     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:42.299379     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:44.608059     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:44.608703     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:44.608783     716 sshutil.go:53] new ssh client: &{IP:172.31.210.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:34:44.725024     716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3411616s)
	I0910 19:34:44.736684     716 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:34:44.741737     716 command_runner.go:130] > NAME=Buildroot
	I0910 19:34:44.742456     716 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 19:34:44.742456     716 command_runner.go:130] > ID=buildroot
	I0910 19:34:44.742456     716 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 19:34:44.742456     716 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 19:34:44.742456     716 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:34:44.742456     716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 19:34:44.742854     716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 19:34:44.743423     716 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 19:34:44.743423     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 19:34:44.752340     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:34:44.768089     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 19:34:44.808034     716 start.go:296] duration metric: took 4.4354142s for postStartSetup
	I0910 19:34:44.811789     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:46.732981     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:46.732981     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:46.733345     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:48.985519     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:48.985519     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:48.986409     716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:34:48.988484     716 start.go:128] duration metric: took 1m52.7077928s to createHost
	I0910 19:34:48.988545     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:50.814507     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:50.814507     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:50.815218     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:53.099755     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:53.099755     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:53.105910     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:34:53.105910     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.71 22 <nil> <nil>}
	I0910 19:34:53.106461     716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:34:53.246559     716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725996893.467092302
	
	I0910 19:34:53.246747     716 fix.go:216] guest clock: 1725996893.467092302
	I0910 19:34:53.246747     716 fix.go:229] Guest: 2024-09-10 19:34:53.467092302 +0000 UTC Remote: 2024-09-10 19:34:48.9885453 +0000 UTC m=+117.640219101 (delta=4.478547002s)
	I0910 19:34:53.246864     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:55.153372     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:55.153372     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:55.153372     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:34:57.431337     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:34:57.431337     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:57.435790     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:34:57.435790     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.71 22 <nil> <nil>}
	I0910 19:34:57.435790     716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725996893
	I0910 19:34:57.586705     716 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 19:34:53 UTC 2024
	
	I0910 19:34:57.586765     716 fix.go:236] clock set: Tue Sep 10 19:34:53 UTC 2024
	 (err=<nil>)
	I0910 19:34:57.586765     716 start.go:83] releasing machines lock for "multinode-629100", held for 2m1.3070272s
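The clock-fix step above reads the guest's epoch time over SSH, compares it with the host's, and resets the guest with `date -s @<epoch>` when they drift. A sketch of the delta arithmetic using the timestamps from this log (the `date -s` is only echoed here, since setting the clock needs root on a real host):

```shell
# Guest/host clock drift, computed the way fix.go reports it.
guest=1725996893.467092302     # guest: `date +%s.%N` over SSH
remote=1725996888.988545300    # host-side wall clock at the same moment
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.3f", g - r }')
# Truncate to whole seconds for the fix command, as the log does.
fix_cmd="sudo date -s @${guest%.*}"
echo "delta=${delta}s fix: $fix_cmd"
```

The ~4.5s delta matches the `(delta=4.478547002s)` line in the `fix.go` output above.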
	I0910 19:34:57.586950     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:34:59.501102     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:34:59.501102     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:34:59.501102     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:35:01.813366     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:35:01.813366     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:01.816294     716 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 19:35:01.816375     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:35:01.823551     716 ssh_runner.go:195] Run: cat /version.json
	I0910 19:35:01.823551     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:35:03.863848     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:35:03.863848     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:03.864229     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:35:03.865536     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:35:03.866066     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:03.866213     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:35:06.211096     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:35:06.211096     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:06.211464     716 sshutil.go:53] new ssh client: &{IP:172.31.210.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:35:06.232677     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:35:06.233074     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:06.233227     716 sshutil.go:53] new ssh client: &{IP:172.31.210.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:35:06.302335     716 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0910 19:35:06.303470     716 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4867837s)
	W0910 19:35:06.303539     716 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 19:35:06.341284     716 command_runner.go:130] > {"iso_version": "v1.34.0-1725912912-19598", "kicbase_version": "v0.0.45", "minikube_version": "v1.34.0", "commit": "a47e98bacf93197560d0f08408949de0434951d5"}
	I0910 19:35:06.341284     716 ssh_runner.go:235] Completed: cat /version.json: (4.5174299s)
	I0910 19:35:06.356759     716 ssh_runner.go:195] Run: systemctl --version
	I0910 19:35:06.365903     716 command_runner.go:130] > systemd 252 (252)
	I0910 19:35:06.366309     716 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0910 19:35:06.376872     716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 19:35:06.384104     716 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0910 19:35:06.384859     716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:35:06.393722     716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:35:06.419989     716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0910 19:35:06.419989     716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
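The bridge/podman CNI configs are disabled by renaming them with a `.mk_disabled` suffix so the runtime ignores them. A sketch of that `find` step against a throwaway directory (standing in for `/etc/cni/net.d`), with the `-exec` quoting made shell-safe:

```shell
# Disable bridge/podman CNI configs by renaming to *.mk_disabled.
set -eu
netd=$(mktemp -d)
touch "$netd/87-podman-bridge.conflist" "$netd/10-other.conf"
find "$netd" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
echo
```

The `-not -name '*.mk_disabled'` guard makes the operation idempotent: a second run finds nothing to rename.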
	I0910 19:35:06.420173     716 start.go:495] detecting cgroup driver to use...
	I0910 19:35:06.420507     716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:35:06.451926     716 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0910 19:35:06.462929     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 19:35:06.489500     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 19:35:06.507568     716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 19:35:06.514869     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 19:35:06.546279     716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0910 19:35:06.557124     716 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 19:35:06.557179     716 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 19:35:06.578096     716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 19:35:06.604872     716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:35:06.632815     716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:35:06.659173     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 19:35:06.686728     716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 19:35:06.713263     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
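The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pinning the sandbox image and forcing `SystemdCgroup = false` to match the "cgroupfs" driver choice. Two of those edits, run against a minimal throwaway config (the TOML content here is an illustrative stand-in, not the VM's full file):

```shell
# Apply two of the logged containerd edits to a sample config.toml.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```

The `( *)` capture preserves each line's indentation, which matters because TOML-in-containerd nests these keys several levels deep.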
	I0910 19:35:06.740295     716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:35:06.756387     716 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 19:35:06.766802     716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:35:06.799666     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:35:06.982771     716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 19:35:07.010408     716 start.go:495] detecting cgroup driver to use...
	I0910 19:35:07.021912     716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 19:35:07.043176     716 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0910 19:35:07.043176     716 command_runner.go:130] > [Unit]
	I0910 19:35:07.043176     716 command_runner.go:130] > Description=Docker Application Container Engine
	I0910 19:35:07.043176     716 command_runner.go:130] > Documentation=https://docs.docker.com
	I0910 19:35:07.043176     716 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0910 19:35:07.043176     716 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0910 19:35:07.043176     716 command_runner.go:130] > StartLimitBurst=3
	I0910 19:35:07.043176     716 command_runner.go:130] > StartLimitIntervalSec=60
	I0910 19:35:07.043176     716 command_runner.go:130] > [Service]
	I0910 19:35:07.043176     716 command_runner.go:130] > Type=notify
	I0910 19:35:07.043176     716 command_runner.go:130] > Restart=on-failure
	I0910 19:35:07.043176     716 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0910 19:35:07.043176     716 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0910 19:35:07.043176     716 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0910 19:35:07.043176     716 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0910 19:35:07.043176     716 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0910 19:35:07.043176     716 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0910 19:35:07.043176     716 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0910 19:35:07.043176     716 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0910 19:35:07.043176     716 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0910 19:35:07.043176     716 command_runner.go:130] > ExecStart=
	I0910 19:35:07.043176     716 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0910 19:35:07.043176     716 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0910 19:35:07.043176     716 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0910 19:35:07.043176     716 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0910 19:35:07.043176     716 command_runner.go:130] > LimitNOFILE=infinity
	I0910 19:35:07.043176     716 command_runner.go:130] > LimitNPROC=infinity
	I0910 19:35:07.043176     716 command_runner.go:130] > LimitCORE=infinity
	I0910 19:35:07.043176     716 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0910 19:35:07.043176     716 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0910 19:35:07.043176     716 command_runner.go:130] > TasksMax=infinity
	I0910 19:35:07.043176     716 command_runner.go:130] > TimeoutStartSec=0
	I0910 19:35:07.044006     716 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0910 19:35:07.044006     716 command_runner.go:130] > Delegate=yes
	I0910 19:35:07.044006     716 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0910 19:35:07.044006     716 command_runner.go:130] > KillMode=process
	I0910 19:35:07.044006     716 command_runner.go:130] > [Install]
	I0910 19:35:07.044052     716 command_runner.go:130] > WantedBy=multi-user.target
	I0910 19:35:07.053623     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:35:07.079686     716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:35:07.113636     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:35:07.143868     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:35:07.174277     716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 19:35:07.233157     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:35:07.253574     716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:35:07.284324     716 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0910 19:35:07.293647     716 ssh_runner.go:195] Run: which cri-dockerd
	I0910 19:35:07.299091     716 command_runner.go:130] > /usr/bin/cri-dockerd
	I0910 19:35:07.308405     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 19:35:07.324161     716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 19:35:07.364439     716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 19:35:07.550055     716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 19:35:07.721411     716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 19:35:07.721411     716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 19:35:07.760758     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:35:07.930316     716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 19:35:10.466823     716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5363376s)
	I0910 19:35:10.475176     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 19:35:10.507663     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:35:10.541483     716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 19:35:10.726585     716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 19:35:10.911145     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:35:11.092217     716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 19:35:11.131747     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:35:11.163303     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:35:11.354538     716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 19:35:11.457155     716 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 19:35:11.469048     716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 19:35:11.476635     716 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0910 19:35:11.476635     716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 19:35:11.476635     716 command_runner.go:130] > Device: 0,22	Inode: 893         Links: 1
	I0910 19:35:11.476635     716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0910 19:35:11.476635     716 command_runner.go:130] > Access: 2024-09-10 19:35:11.603471949 +0000
	I0910 19:35:11.476635     716 command_runner.go:130] > Modify: 2024-09-10 19:35:11.603471949 +0000
	I0910 19:35:11.476635     716 command_runner.go:130] > Change: 2024-09-10 19:35:11.606472143 +0000
	I0910 19:35:11.476635     716 command_runner.go:130] >  Birth: -
	I0910 19:35:11.476635     716 start.go:563] Will wait 60s for crictl version
	I0910 19:35:11.485854     716 ssh_runner.go:195] Run: which crictl
	I0910 19:35:11.491453     716 command_runner.go:130] > /usr/bin/crictl
	I0910 19:35:11.500287     716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:35:11.549372     716 command_runner.go:130] > Version:  0.1.0
	I0910 19:35:11.549372     716 command_runner.go:130] > RuntimeName:  docker
	I0910 19:35:11.549679     716 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0910 19:35:11.549679     716 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 19:35:11.549679     716 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 19:35:11.556942     716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:35:11.590864     716 command_runner.go:130] > 27.2.0
	I0910 19:35:11.599254     716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:35:11.628571     716 command_runner.go:130] > 27.2.0
	I0910 19:35:11.634515     716 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 19:35:11.634633     716 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 19:35:11.637583     716 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 19:35:11.637583     716 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 19:35:11.637583     716 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 19:35:11.637583     716 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 19:35:11.640269     716 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 19:35:11.640269     716 ip.go:214] interface addr: 172.31.208.1/20
	I0910 19:35:11.649798     716 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 19:35:11.655451     716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
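The `host.minikube.internal` entry above is written with a grep-and-append pattern that stays idempotent across restarts: drop any stale line for the name, then append the fresh mapping. A standalone bash sketch of the same pattern against a scratch file (the real run targets `/etc/hosts` via sudo):

```shell
# Idempotent hosts-entry update, mirroring the pattern the log runs
# against /etc/hosts (scratch file here, not the real one).
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.31.208.9\thost.minikube.internal\n' > "$hosts"

ip='172.31.208.1'
name='host.minikube.internal'

# Strip any stale mapping for the name, then append the fresh one.
{ grep -v $'\t'"$name"'$' "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

The log's variant writes to `/tmp/h.$$` and `sudo cp`s it back so the final write into `/etc/hosts` is a single privileged copy.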
	I0910 19:35:11.676011     716 kubeadm.go:883] updating cluster {Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.210.71 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:35:11.676193     716 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:35:11.685503     716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 19:35:11.706563     716 docker.go:685] Got preloaded images: 
	I0910 19:35:11.706563     716 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0910 19:35:11.717078     716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 19:35:11.733772     716 command_runner.go:139] > {"Repositories":{}}
	I0910 19:35:11.742438     716 ssh_runner.go:195] Run: which lz4
	I0910 19:35:11.747300     716 command_runner.go:130] > /usr/bin/lz4
	I0910 19:35:11.747300     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0910 19:35:11.757302     716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 19:35:11.763275     716 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:35:11.763275     716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:35:11.763464     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0910 19:35:13.121375     716 docker.go:649] duration metric: took 1.3721858s to copy over tarball
	I0910 19:35:13.130946     716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 19:35:22.299744     716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.1680002s)
	I0910 19:35:22.299818     716 ssh_runner.go:146] rm: /preloaded.tar.lz4
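The preload sequence above is check, copy, extract, delete: stat the target (a non-zero exit means "needs transfer"), scp the tarball over, untar it into `/var` with xattrs preserved, then remove it. A local sketch of that sequence under stated substitutions: gzip stands in for lz4 so it runs without the `lz4` binary, and all paths are scratch directories:

```shell
set -e
work=$(mktemp -d)
# Build a tiny stand-in "preload" tarball.
mkdir -p "$work/src/lib/docker"
echo image-data > "$work/src/lib/docker/layer"
tar -C "$work/src" -czf "$work/preloaded.tar.gz" lib

dest="$work/var"
mkdir -p "$dest"
# 1) existence check (the log uses `stat -c "%s %y"` and falls back
#    to scp when it exits non-zero)
[ -s "$work/preloaded.tar.gz" ]
# 2) unpack into the target root (the log adds --xattrs flags and -I lz4)
tar -C "$dest" -xzf "$work/preloaded.tar.gz"
# 3) delete the tarball once extracted
rm "$work/preloaded.tar.gz"
cat "$dest/lib/docker/layer"
```

The `--xattrs --xattrs-include security.capability` flags in the real command matter because some image layers carry file capabilities that a plain untar would drop.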
	I0910 19:35:22.354478     716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 19:35:22.370845     716 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.15-0":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.31.0":"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf":"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.31.0":"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d":"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.31.0":"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe":"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.31.0":"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808":"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I0910 19:35:22.370845     716 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0910 19:35:22.410574     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:35:22.580150     716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 19:35:25.828947     716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2484645s)
	I0910 19:35:25.835867     716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 19:35:25.860660     716 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 19:35:25.860660     716 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0910 19:35:25.860660     716 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0910 19:35:25.860660     716 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0910 19:35:25.861685     716 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0910 19:35:25.861685     716 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0910 19:35:25.861685     716 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0910 19:35:25.861685     716 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:35:25.863236     716 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 19:35:25.863324     716 cache_images.go:84] Images are preloaded, skipping loading
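The "Images are preloaded, skipping loading" decision above (made in minikube's `cache_images.go`) reduces to a set comparison between the required image list and what `docker images` reported. A sketch of just that set logic with `comm(1)` over sorted lists, using two of the image names from the log:

```shell
# Expected vs. actual image lists as a sorted-set difference.
want=$(mktemp); got=$(mktemp)
printf '%s\n' \
  registry.k8s.io/kube-apiserver:v1.31.0 \
  registry.k8s.io/pause:3.10 | sort > "$want"
printf '%s\n' \
  registry.k8s.io/pause:3.10 \
  registry.k8s.io/kube-apiserver:v1.31.0 | sort > "$got"
# Lines only in $want are images that would still need loading.
missing=$(comm -23 "$want" "$got")
if [ -z "$missing" ]; then
  echo "Images are preloaded, skipping loading"
fi
```

In a real check `$got` would be fed by `docker images --format '{{.Repository}}:{{.Tag}}'`, the same command the log runs.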
	I0910 19:35:25.863324     716 kubeadm.go:934] updating node { 172.31.210.71 8443 v1.31.0 docker true true} ...
	I0910 19:35:25.863513     716 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-629100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.210.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:35:25.869706     716 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 19:35:25.932046     716 command_runner.go:130] > cgroupfs
	I0910 19:35:25.932338     716 cni.go:84] Creating CNI manager for ""
	I0910 19:35:25.932338     716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 19:35:25.932338     716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 19:35:25.932338     716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.210.71 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-629100 NodeName:multinode-629100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.210.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.210.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:35:25.932937     716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.210.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-629100"
	  kubeletExtraArgs:
	    node-ip: 172.31.210.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.210.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:35:25.945537     716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:35:25.966728     716 command_runner.go:130] > kubeadm
	I0910 19:35:25.966728     716 command_runner.go:130] > kubectl
	I0910 19:35:25.966728     716 command_runner.go:130] > kubelet
	I0910 19:35:25.966728     716 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:35:25.974985     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:35:25.992066     716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0910 19:35:26.023687     716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:35:26.051884     716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
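The `kubeadm.yaml.new` written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick structural sanity check is to count `kind:` lines; the heredoc below reproduces only the skeleton of the generated file:

```shell
# Count the top-level documents in a kubeadm-style config stream.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$cfg"   # prints 4
```

On the node the real file lands at `/var/tmp/minikube/kubeadm.yaml.new` and is promoted to `kubeadm.yaml` later in the log.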
	I0910 19:35:26.090824     716 ssh_runner.go:195] Run: grep 172.31.210.71	control-plane.minikube.internal$ /etc/hosts
	I0910 19:35:26.096822     716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.210.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:35:26.127144     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:35:26.299973     716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:35:26.325871     716 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100 for IP: 172.31.210.71
	I0910 19:35:26.325936     716 certs.go:194] generating shared ca certs ...
	I0910 19:35:26.325936     716 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:35:26.326649     716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 19:35:26.327027     716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 19:35:26.327157     716 certs.go:256] generating profile certs ...
	I0910 19:35:26.327841     716 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\client.key
	I0910 19:35:26.327919     716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\client.crt with IP's: []
	I0910 19:35:26.536260     716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\client.crt ...
	I0910 19:35:26.536260     716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\client.crt: {Name:mk0868ede1a6f5789d3d74544225f0aafffdd0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:35:26.537121     716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\client.key ...
	I0910 19:35:26.537121     716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\client.key: {Name:mk3e534e21e00d6ce006190243b95c0830dbaa13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:35:26.538106     716 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.f4040a10
	I0910 19:35:26.538106     716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.f4040a10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.210.71]
	I0910 19:35:26.663592     716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.f4040a10 ...
	I0910 19:35:26.663592     716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.f4040a10: {Name:mk43af5336d5486c529c544c3cb495d6481f7e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:35:26.664871     716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.f4040a10 ...
	I0910 19:35:26.664871     716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.f4040a10: {Name:mk3556bf9dda70dfdb0217685d7c65555cb4b25a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:35:26.666867     716 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.f4040a10 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt
	I0910 19:35:26.680860     716 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.f4040a10 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key
	I0910 19:35:26.682271     716 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key
	I0910 19:35:26.682271     716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.crt with IP's: []
	I0910 19:35:26.857335     716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.crt ...
	I0910 19:35:26.857335     716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.crt: {Name:mk372ba08031d999e1ec63db7c8cb09871a3063d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:35:26.857924     716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key ...
	I0910 19:35:26.857924     716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key: {Name:mkff2c52d7705969e6f8fb836f19bf92ef51e537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:35:26.858693     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 19:35:26.859824     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 19:35:26.859997     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 19:35:26.860092     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 19:35:26.860193     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 19:35:26.860314     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 19:35:26.860363     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 19:35:26.872581     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 19:35:26.873245     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 19:35:26.873614     716 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 19:35:26.873614     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 19:35:26.874032     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 19:35:26.874260     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 19:35:26.874489     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 19:35:26.874708     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 19:35:26.875171     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 19:35:26.875171     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:35:26.875390     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 19:35:26.877535     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:35:26.922279     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 19:35:26.962634     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:35:27.006420     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 19:35:27.046424     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 19:35:27.094427     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:35:27.131429     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:35:27.179446     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:35:27.222601     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 19:35:27.262664     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:35:27.301793     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 19:35:27.340605     716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:35:27.377084     716 ssh_runner.go:195] Run: openssl version
	I0910 19:35:27.383391     716 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 19:35:27.392391     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 19:35:27.420596     716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 19:35:27.426596     716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:35:27.426596     716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:35:27.434565     716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 19:35:27.441620     716 command_runner.go:130] > 51391683
	I0910 19:35:27.448621     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 19:35:27.477163     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 19:35:27.503046     716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 19:35:27.508760     716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:35:27.508760     716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:35:27.517011     716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 19:35:27.525024     716 command_runner.go:130] > 3ec20f2e
	I0910 19:35:27.533331     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:35:27.561985     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:35:27.589268     716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:35:27.596906     716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:35:27.596906     716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:35:27.605170     716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:35:27.612042     716 command_runner.go:130] > b5213941
	I0910 19:35:27.621014     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
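The `51391683.0`, `3ec20f2e.0`, and `b5213941.0` symlinks created above follow OpenSSL's subject-hash lookup scheme: `openssl x509 -hash` prints the hash of the cert's subject, and a `<hash>.0` symlink in the certs directory lets OpenSSL find the CA by that name. A standalone demo, with a throwaway self-signed cert standing in for the minikube `.pem` files:

```shell
set -e
certs=$(mktemp -d)
# Generate a disposable self-signed cert (stand-in for 4724.pem etc.).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=demo' -keyout "$certs/demo.key" -out "$certs/demo.pem" 2>/dev/null
# Same two steps the log performs: hash the cert, then link <hash>.0 to it.
h=$(openssl x509 -hash -noout -in "$certs/demo.pem")
ln -fs "$certs/demo.pem" "$certs/$h.0"
openssl x509 -noout -subject -in "$certs/$h.0"
```

This is the by-hand equivalent of `c_rehash` / `openssl rehash`; minikube drives it manually so it can symlink only the certs it manages.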
	I0910 19:35:27.647286     716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:35:27.652164     716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 19:35:27.653551     716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 19:35:27.654067     716 kubeadm.go:392] StartCluster: {Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.210.71 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:35:27.660922     716 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 19:35:27.700783     716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:35:27.717041     716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0910 19:35:27.717041     716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0910 19:35:27.717041     716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0910 19:35:27.728550     716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:35:27.754297     716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:35:27.773924     716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0910 19:35:27.774917     716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0910 19:35:27.774917     716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0910 19:35:27.774917     716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:35:27.774917     716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:35:27.774917     716 kubeadm.go:157] found existing configuration files:
	
	I0910 19:35:27.781931     716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:35:27.794926     716 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:35:27.795752     716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:35:27.803538     716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:35:27.825976     716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:35:27.842135     716 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:35:27.842277     716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:35:27.856979     716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:35:27.883624     716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:35:27.902313     716 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:35:27.902313     716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:35:27.909566     716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:35:27.934626     716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:35:27.950633     716 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:35:27.951306     716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:35:27.958545     716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:35:27.975176     716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:35:28.175029     716 command_runner.go:130] ! W0910 19:35:28.398634    1756 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:35:28.175376     716 kubeadm.go:310] W0910 19:35:28.398634    1756 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:35:28.176326     716 kubeadm.go:310] W0910 19:35:28.399423    1756 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:35:28.176326     716 command_runner.go:130] ! W0910 19:35:28.399423    1756 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:35:28.306235     716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:35:28.306860     716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:35:41.381288     716 command_runner.go:130] > [init] Using Kubernetes version: v1.31.0
	I0910 19:35:41.381475     716 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 19:35:41.381772     716 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 19:35:41.381772     716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:35:41.381956     716 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:35:41.381956     716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:35:41.382180     716 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:35:41.382298     716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:35:41.382298     716 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:35:41.382298     716 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:35:41.382910     716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:35:41.382994     716 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:35:41.386837     716 out.go:235]   - Generating certificates and keys ...
	I0910 19:35:41.386837     716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:35:41.386837     716 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0910 19:35:41.386837     716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:35:41.386837     716 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0910 19:35:41.387450     716 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 19:35:41.387474     716 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 19:35:41.387474     716 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0910 19:35:41.387474     716 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 19:35:41.387474     716 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0910 19:35:41.387474     716 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 19:35:41.387474     716 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 19:35:41.387474     716 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0910 19:35:41.388010     716 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 19:35:41.388010     716 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0910 19:35:41.388176     716 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-629100] and IPs [172.31.210.71 127.0.0.1 ::1]
	I0910 19:35:41.388176     716 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-629100] and IPs [172.31.210.71 127.0.0.1 ::1]
	I0910 19:35:41.388176     716 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 19:35:41.388176     716 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0910 19:35:41.388176     716 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-629100] and IPs [172.31.210.71 127.0.0.1 ::1]
	I0910 19:35:41.388176     716 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-629100] and IPs [172.31.210.71 127.0.0.1 ::1]
	I0910 19:35:41.388176     716 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 19:35:41.388176     716 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 19:35:41.388176     716 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 19:35:41.388176     716 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 19:35:41.389008     716 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 19:35:41.389089     716 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0910 19:35:41.389089     716 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:35:41.389089     716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:35:41.389089     716 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:35:41.389089     716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:35:41.389089     716 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:35:41.389089     716 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:35:41.389089     716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:35:41.389089     716 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:35:41.389700     716 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:35:41.389700     716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:35:41.389700     716 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:35:41.389700     716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:35:41.389700     716 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:35:41.389700     716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:35:41.389700     716 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:35:41.389700     716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:35:41.392609     716 out.go:235]   - Booting up control plane ...
	I0910 19:35:41.392609     716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:35:41.392609     716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:35:41.392609     716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:35:41.392609     716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:35:41.393616     716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:35:41.393616     716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:35:41.393616     716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:35:41.393616     716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:35:41.393616     716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:35:41.393616     716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:35:41.393616     716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0910 19:35:41.393616     716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:35:41.393616     716 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 19:35:41.393616     716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 19:35:41.394803     716 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:35:41.394829     716 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:35:41.394921     716 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.502199087s
	I0910 19:35:41.394921     716 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502199087s
	I0910 19:35:41.394921     716 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 19:35:41.394921     716 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 19:35:41.394921     716 kubeadm.go:310] [api-check] The API server is healthy after 6.002611875s
	I0910 19:35:41.394921     716 command_runner.go:130] > [api-check] The API server is healthy after 6.002611875s
	I0910 19:35:41.395479     716 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 19:35:41.395479     716 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 19:35:41.395479     716 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 19:35:41.395479     716 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 19:35:41.395479     716 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0910 19:35:41.395479     716 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 19:35:41.395479     716 kubeadm.go:310] [mark-control-plane] Marking the node multinode-629100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 19:35:41.395479     716 command_runner.go:130] > [mark-control-plane] Marking the node multinode-629100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 19:35:41.395479     716 kubeadm.go:310] [bootstrap-token] Using token: 08qnpt.kj1chux6p793l65r
	I0910 19:35:41.395479     716 command_runner.go:130] > [bootstrap-token] Using token: 08qnpt.kj1chux6p793l65r
	I0910 19:35:41.399484     716 out.go:235]   - Configuring RBAC rules ...
	I0910 19:35:41.399484     716 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 19:35:41.399484     716 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 19:35:41.399484     716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 19:35:41.399484     716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 19:35:41.399484     716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 19:35:41.399484     716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 19:35:41.400480     716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 19:35:41.400480     716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 19:35:41.400480     716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 19:35:41.400480     716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 19:35:41.400480     716 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 19:35:41.400480     716 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 19:35:41.400480     716 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 19:35:41.400480     716 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 19:35:41.400480     716 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0910 19:35:41.400480     716 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 19:35:41.400480     716 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 19:35:41.400480     716 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0910 19:35:41.400480     716 kubeadm.go:310] 
	I0910 19:35:41.401477     716 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 19:35:41.401477     716 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0910 19:35:41.401477     716 kubeadm.go:310] 
	I0910 19:35:41.401477     716 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0910 19:35:41.401477     716 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 19:35:41.401477     716 kubeadm.go:310] 
	I0910 19:35:41.401477     716 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 19:35:41.401477     716 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0910 19:35:41.401477     716 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 19:35:41.401477     716 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 19:35:41.401477     716 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 19:35:41.401477     716 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 19:35:41.401477     716 kubeadm.go:310] 
	I0910 19:35:41.401477     716 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 19:35:41.401477     716 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0910 19:35:41.401477     716 kubeadm.go:310] 
	I0910 19:35:41.401477     716 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 19:35:41.401477     716 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 19:35:41.401477     716 kubeadm.go:310] 
	I0910 19:35:41.401477     716 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0910 19:35:41.401477     716 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 19:35:41.402487     716 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 19:35:41.402487     716 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 19:35:41.402487     716 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 19:35:41.402487     716 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 19:35:41.402487     716 kubeadm.go:310] 
	I0910 19:35:41.402487     716 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0910 19:35:41.402487     716 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 19:35:41.402487     716 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 19:35:41.402487     716 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0910 19:35:41.402487     716 kubeadm.go:310] 
	I0910 19:35:41.402487     716 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 08qnpt.kj1chux6p793l65r \
	I0910 19:35:41.402487     716 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 08qnpt.kj1chux6p793l65r \
	I0910 19:35:41.402487     716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b \
	I0910 19:35:41.402487     716 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b \
	I0910 19:35:41.403485     716 kubeadm.go:310] 	--control-plane 
	I0910 19:35:41.403485     716 command_runner.go:130] > 	--control-plane 
	I0910 19:35:41.403485     716 kubeadm.go:310] 
	I0910 19:35:41.403682     716 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0910 19:35:41.403682     716 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 19:35:41.403682     716 kubeadm.go:310] 
	I0910 19:35:41.403858     716 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 08qnpt.kj1chux6p793l65r \
	I0910 19:35:41.403858     716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 08qnpt.kj1chux6p793l65r \
	I0910 19:35:41.403989     716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
	I0910 19:35:41.403989     716 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
	I0910 19:35:41.403989     716 cni.go:84] Creating CNI manager for ""
	I0910 19:35:41.403989     716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 19:35:41.406596     716 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0910 19:35:41.418592     716 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0910 19:35:41.426195     716 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0910 19:35:41.426195     716 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0910 19:35:41.426195     716 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0910 19:35:41.426195     716 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 19:35:41.426195     716 command_runner.go:130] > Access: 2024-09-10 19:33:56.694922700 +0000
	I0910 19:35:41.426195     716 command_runner.go:130] > Modify: 2024-09-10 02:48:06.000000000 +0000
	I0910 19:35:41.426195     716 command_runner.go:130] > Change: 2024-09-10 19:33:47.584000000 +0000
	I0910 19:35:41.426195     716 command_runner.go:130] >  Birth: -
	I0910 19:35:41.426318     716 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0910 19:35:41.426389     716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0910 19:35:41.469283     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0910 19:35:41.970446     716 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0910 19:35:41.985146     716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0910 19:35:42.009762     716 command_runner.go:130] > serviceaccount/kindnet created
	I0910 19:35:42.037360     716 command_runner.go:130] > daemonset.apps/kindnet created
	I0910 19:35:42.040352     716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:35:42.053599     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:35:42.053599     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-629100 minikube.k8s.io/updated_at=2024_09_10T19_35_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=multinode-629100 minikube.k8s.io/primary=true
	I0910 19:35:42.069057     716 command_runner.go:130] > -16
	I0910 19:35:42.069057     716 ops.go:34] apiserver oom_adj: -16
	I0910 19:35:42.237370     716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0910 19:35:42.240800     716 command_runner.go:130] > node/multinode-629100 labeled
	I0910 19:35:42.250063     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:35:42.354519     716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0910 19:35:42.757054     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:35:42.852070     716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0910 19:35:43.255673     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:35:43.352318     716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0910 19:35:43.753469     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:35:43.852271     716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0910 19:35:44.258509     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:35:44.351428     716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0910 19:35:44.765884     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:35:44.869542     716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0910 19:35:45.255225     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:35:45.359508     716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0910 19:35:45.758853     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:35:45.863044     716 command_runner.go:130] > NAME      SECRETS   AGE
	I0910 19:35:45.863737     716 command_runner.go:130] > default   0         0s
	I0910 19:35:45.863788     716 kubeadm.go:1113] duration metric: took 3.8222498s to wait for elevateKubeSystemPrivileges
	I0910 19:35:45.863788     716 kubeadm.go:394] duration metric: took 18.2085029s to StartCluster
	I0910 19:35:45.863788     716 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:35:45.864075     716 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:35:45.865956     716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:35:45.868003     716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 19:35:45.868210     716 start.go:235] Will wait 6m0s for node &{Name: IP:172.31.210.71 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 19:35:45.868210     716 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:35:45.868365     716 addons.go:69] Setting storage-provisioner=true in profile "multinode-629100"
	I0910 19:35:45.868365     716 addons.go:69] Setting default-storageclass=true in profile "multinode-629100"
	I0910 19:35:45.868572     716 addons.go:234] Setting addon storage-provisioner=true in "multinode-629100"
	I0910 19:35:45.868796     716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-629100"
	I0910 19:35:45.868796     716 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:35:45.869211     716 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:35:45.869926     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:35:45.870540     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:35:45.872958     716 out.go:177] * Verifying Kubernetes components...
	I0910 19:35:45.886645     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:35:46.102193     716 command_runner.go:130] > apiVersion: v1
	I0910 19:35:46.102193     716 command_runner.go:130] > data:
	I0910 19:35:46.102193     716 command_runner.go:130] >   Corefile: |
	I0910 19:35:46.102193     716 command_runner.go:130] >     .:53 {
	I0910 19:35:46.102193     716 command_runner.go:130] >         errors
	I0910 19:35:46.102193     716 command_runner.go:130] >         health {
	I0910 19:35:46.102193     716 command_runner.go:130] >            lameduck 5s
	I0910 19:35:46.102193     716 command_runner.go:130] >         }
	I0910 19:35:46.102193     716 command_runner.go:130] >         ready
	I0910 19:35:46.102193     716 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0910 19:35:46.102193     716 command_runner.go:130] >            pods insecure
	I0910 19:35:46.102193     716 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0910 19:35:46.102193     716 command_runner.go:130] >            ttl 30
	I0910 19:35:46.102193     716 command_runner.go:130] >         }
	I0910 19:35:46.102193     716 command_runner.go:130] >         prometheus :9153
	I0910 19:35:46.102193     716 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0910 19:35:46.102193     716 command_runner.go:130] >            max_concurrent 1000
	I0910 19:35:46.102193     716 command_runner.go:130] >         }
	I0910 19:35:46.102193     716 command_runner.go:130] >         cache 30
	I0910 19:35:46.102193     716 command_runner.go:130] >         loop
	I0910 19:35:46.102193     716 command_runner.go:130] >         reload
	I0910 19:35:46.102193     716 command_runner.go:130] >         loadbalance
	I0910 19:35:46.102193     716 command_runner.go:130] >     }
	I0910 19:35:46.102193     716 command_runner.go:130] > kind: ConfigMap
	I0910 19:35:46.102193     716 command_runner.go:130] > metadata:
	I0910 19:35:46.102193     716 command_runner.go:130] >   creationTimestamp: "2024-09-10T19:35:40Z"
	I0910 19:35:46.102193     716 command_runner.go:130] >   name: coredns
	I0910 19:35:46.102193     716 command_runner.go:130] >   namespace: kube-system
	I0910 19:35:46.102193     716 command_runner.go:130] >   resourceVersion: "224"
	I0910 19:35:46.102193     716 command_runner.go:130] >   uid: 4e3f9543-4c36-48e8-a2bf-d154b03320f5
	I0910 19:35:46.102916     716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 19:35:46.222239     716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:35:46.663760     716 command_runner.go:130] > configmap/coredns replaced
	I0910 19:35:46.663760     716 start.go:971] {"host.minikube.internal": 172.31.208.1} host record injected into CoreDNS's ConfigMap
	I0910 19:35:46.665101     716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:35:46.665101     716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:35:46.666129     716 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.210.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:35:46.666402     716 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.210.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:35:46.667431     716 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 19:35:46.668075     716 node_ready.go:35] waiting up to 6m0s for node "multinode-629100" to be "Ready" ...
	I0910 19:35:46.668075     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:46.668075     716 round_trippers.go:469] Request Headers:
	I0910 19:35:46.668075     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:46.668075     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:46.668075     716 round_trippers.go:463] GET https://172.31.210.71:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0910 19:35:46.668075     716 round_trippers.go:469] Request Headers:
	I0910 19:35:46.668075     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:46.668075     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:46.684631     716 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0910 19:35:46.684631     716 round_trippers.go:577] Response Headers:
	I0910 19:35:46.684702     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:46.684702     716 round_trippers.go:580]     Content-Length: 291
	I0910 19:35:46.684702     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:46 GMT
	I0910 19:35:46.684702     716 round_trippers.go:580]     Audit-Id: 2a85776d-d858-418e-9590-189116390077
	I0910 19:35:46.684702     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:46.684702     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:46.684702     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:46.684795     716 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c65809bd-9828-42f9-915c-e5324adc95bf","resourceVersion":"344","creationTimestamp":"2024-09-10T19:35:41Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0910 19:35:46.685043     716 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c65809bd-9828-42f9-915c-e5324adc95bf","resourceVersion":"344","creationTimestamp":"2024-09-10T19:35:41Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0910 19:35:46.685043     716 round_trippers.go:463] PUT https://172.31.210.71:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0910 19:35:46.685043     716 round_trippers.go:469] Request Headers:
	I0910 19:35:46.685043     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:46.685043     716 round_trippers.go:473]     Content-Type: application/json
	I0910 19:35:46.685043     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:46.685590     716 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0910 19:35:46.685671     716 round_trippers.go:577] Response Headers:
	I0910 19:35:46.685671     716 round_trippers.go:580]     Audit-Id: 56dcbc2f-1317-4b69-9622-ebcf00c8b554
	I0910 19:35:46.685722     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:46.685722     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:46.685722     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:46.685722     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:46.685782     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:46 GMT
	I0910 19:35:46.687122     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:46.698543     716 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0910 19:35:46.698543     716 round_trippers.go:577] Response Headers:
	I0910 19:35:46.698543     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:46 GMT
	I0910 19:35:46.698543     716 round_trippers.go:580]     Audit-Id: 8561561c-c780-4be5-ba91-86067680e07f
	I0910 19:35:46.698543     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:46.698543     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:46.698543     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:46.698543     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:46.699095     716 round_trippers.go:580]     Content-Length: 291
	I0910 19:35:46.699095     716 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c65809bd-9828-42f9-915c-e5324adc95bf","resourceVersion":"346","creationTimestamp":"2024-09-10T19:35:41Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0910 19:35:47.170314     716 round_trippers.go:463] GET https://172.31.210.71:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0910 19:35:47.170314     716 round_trippers.go:469] Request Headers:
	I0910 19:35:47.170314     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:47.170314     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:47.170314     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:47.170314     716 round_trippers.go:469] Request Headers:
	I0910 19:35:47.170314     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:47.170314     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:47.174021     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:47.174021     716 round_trippers.go:577] Response Headers:
	I0910 19:35:47.174021     716 round_trippers.go:580]     Content-Length: 291
	I0910 19:35:47.174021     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:47 GMT
	I0910 19:35:47.174021     716 round_trippers.go:580]     Audit-Id: c558cbe5-dac5-484c-b253-9ac0af944c84
	I0910 19:35:47.174021     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:47.174021     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:47.174021     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:47.174021     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:47.174021     716 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c65809bd-9828-42f9-915c-e5324adc95bf","resourceVersion":"356","creationTimestamp":"2024-09-10T19:35:41Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0910 19:35:47.175011     716 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-629100" context rescaled to 1 replicas
	I0910 19:35:47.175011     716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:35:47.175011     716 round_trippers.go:577] Response Headers:
	I0910 19:35:47.175011     716 round_trippers.go:580]     Audit-Id: 075d7385-acdb-4654-8020-273ab7750b28
	I0910 19:35:47.175011     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:47.175011     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:47.175011     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:47.175011     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:47.175011     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:47 GMT
	I0910 19:35:47.175011     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:47.676963     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:47.676963     716 round_trippers.go:469] Request Headers:
	I0910 19:35:47.676963     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:47.676963     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:47.679820     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:35:47.679820     716 round_trippers.go:577] Response Headers:
	I0910 19:35:47.679820     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:47.679820     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:47.679820     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:47.679820     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:47 GMT
	I0910 19:35:47.679820     716 round_trippers.go:580]     Audit-Id: 15ebfb91-657a-485d-a2ea-e9053d7601a8
	I0910 19:35:47.679820     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:47.681367     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:47.954427     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:35:47.955412     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:47.959436     716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:35:47.959436     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:35:47.959436     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:47.960430     716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:35:47.961450     716 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.210.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:35:47.961450     716 addons.go:234] Setting addon default-storageclass=true in "multinode-629100"
	I0910 19:35:47.962433     716 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:35:47.962433     716 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:35:47.962433     716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:35:47.962433     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:35:47.962433     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:35:48.171164     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:48.171396     716 round_trippers.go:469] Request Headers:
	I0910 19:35:48.171396     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:48.171396     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:48.174584     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:48.174584     716 round_trippers.go:577] Response Headers:
	I0910 19:35:48.174584     716 round_trippers.go:580]     Audit-Id: 4e586354-fb7b-4fc7-a09f-4e909eee0ed5
	I0910 19:35:48.175006     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:48.175006     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:48.175006     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:48.175006     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:48.175006     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:48 GMT
	I0910 19:35:48.175471     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:48.683014     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:48.683175     716 round_trippers.go:469] Request Headers:
	I0910 19:35:48.683175     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:48.683175     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:48.686871     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:48.686947     716 round_trippers.go:577] Response Headers:
	I0910 19:35:48.686947     716 round_trippers.go:580]     Audit-Id: c28349b7-c877-4f3a-a982-fbf606550484
	I0910 19:35:48.686947     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:48.686947     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:48.686947     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:48.686947     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:48.686947     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:48 GMT
	I0910 19:35:48.687327     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:48.687743     716 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:35:49.176311     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:49.176311     716 round_trippers.go:469] Request Headers:
	I0910 19:35:49.176311     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:49.176311     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:49.178892     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:35:49.178892     716 round_trippers.go:577] Response Headers:
	I0910 19:35:49.178892     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:49.178892     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:49 GMT
	I0910 19:35:49.178892     716 round_trippers.go:580]     Audit-Id: 6ea8e02b-533a-40d9-943c-b04aafe16021
	I0910 19:35:49.178892     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:49.178892     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:49.178892     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:49.179894     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:49.671811     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:49.671875     716 round_trippers.go:469] Request Headers:
	I0910 19:35:49.671875     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:49.671875     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:49.675108     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:49.675108     716 round_trippers.go:577] Response Headers:
	I0910 19:35:49.675108     716 round_trippers.go:580]     Audit-Id: ae54833b-5af4-4b00-b091-4d1c7b4108c9
	I0910 19:35:49.675108     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:49.675108     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:49.675108     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:49.675108     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:49.675108     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:49 GMT
	I0910 19:35:49.675108     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:50.032902     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:35:50.032902     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:50.033081     716 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:35:50.033153     716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:35:50.033153     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:35:50.050411     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:35:50.050411     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:50.050411     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:35:50.179594     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:50.179594     716 round_trippers.go:469] Request Headers:
	I0910 19:35:50.179594     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:50.179594     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:50.183367     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:50.183367     716 round_trippers.go:577] Response Headers:
	I0910 19:35:50.183439     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:50.183439     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:50.183439     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:50.183439     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:50.183505     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:50 GMT
	I0910 19:35:50.183505     716 round_trippers.go:580]     Audit-Id: 51d68585-c755-4f98-bbcc-cd71f941648d
	I0910 19:35:50.183734     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:50.673795     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:50.673875     716 round_trippers.go:469] Request Headers:
	I0910 19:35:50.673875     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:50.673875     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:50.680813     716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:35:50.680813     716 round_trippers.go:577] Response Headers:
	I0910 19:35:50.680813     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:50 GMT
	I0910 19:35:50.680813     716 round_trippers.go:580]     Audit-Id: 2b875aea-f202-4315-a4fb-b04004cfb46f
	I0910 19:35:50.680813     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:50.680813     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:50.680813     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:50.680813     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:50.681775     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:51.182825     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:51.183001     716 round_trippers.go:469] Request Headers:
	I0910 19:35:51.183001     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:51.183001     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:51.186388     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:51.186525     716 round_trippers.go:577] Response Headers:
	I0910 19:35:51.186525     716 round_trippers.go:580]     Audit-Id: fc190c22-e7cd-4989-8cf1-99ac978dc2c1
	I0910 19:35:51.186525     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:51.186525     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:51.186525     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:51.186525     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:51.186525     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:51 GMT
	I0910 19:35:51.186944     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:51.187387     716 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:35:51.673084     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:51.673084     716 round_trippers.go:469] Request Headers:
	I0910 19:35:51.673084     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:51.673084     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:51.675757     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:35:51.676237     716 round_trippers.go:577] Response Headers:
	I0910 19:35:51.676237     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:51 GMT
	I0910 19:35:51.676312     716 round_trippers.go:580]     Audit-Id: ad92d936-26cc-4797-948a-547dbedd70b9
	I0910 19:35:51.676312     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:51.676312     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:51.676312     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:51.676312     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:51.676312     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:52.063709     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:35:52.063709     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:52.063835     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:35:52.181032     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:52.181032     716 round_trippers.go:469] Request Headers:
	I0910 19:35:52.181032     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:52.181106     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:52.183843     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:35:52.183843     716 round_trippers.go:577] Response Headers:
	I0910 19:35:52.183843     716 round_trippers.go:580]     Audit-Id: b3cec5f3-4985-4619-9467-b6e8686398a2
	I0910 19:35:52.183843     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:52.184575     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:52.184575     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:52.184575     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:52.184575     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:52 GMT
	I0910 19:35:52.184917     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:52.463781     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:35:52.464282     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:52.464677     716 sshutil.go:53] new ssh client: &{IP:172.31.210.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:35:52.606367     716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:35:52.670630     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:52.670630     716 round_trippers.go:469] Request Headers:
	I0910 19:35:52.670630     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:52.670630     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:52.674256     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:52.674679     716 round_trippers.go:577] Response Headers:
	I0910 19:35:52.674679     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:52 GMT
	I0910 19:35:52.674774     716 round_trippers.go:580]     Audit-Id: 5437c82f-ba46-4258-b3a0-aa0d0df93dbe
	I0910 19:35:52.674774     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:52.674774     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:52.674838     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:52.674865     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:52.675092     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:53.182023     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:53.182108     716 round_trippers.go:469] Request Headers:
	I0910 19:35:53.182108     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:53.182108     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:53.199555     716 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0910 19:35:53.199555     716 round_trippers.go:577] Response Headers:
	I0910 19:35:53.199555     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:53 GMT
	I0910 19:35:53.199555     716 round_trippers.go:580]     Audit-Id: 48e98bf0-589d-4e84-9483-14d929057aa7
	I0910 19:35:53.199555     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:53.199555     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:53.199555     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:53.199555     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:53.200077     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:53.200514     716 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:35:53.674409     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:53.674409     716 round_trippers.go:469] Request Headers:
	I0910 19:35:53.674409     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:53.674409     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:53.680379     716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:35:53.680379     716 round_trippers.go:577] Response Headers:
	I0910 19:35:53.680379     716 round_trippers.go:580]     Audit-Id: 4731e43c-769a-43b7-ab45-2f1c30f544b1
	I0910 19:35:53.680379     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:53.680379     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:53.680379     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:53.680379     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:53.680379     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:53 GMT
	I0910 19:35:53.680379     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:53.710008     716 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0910 19:35:53.710882     716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0910 19:35:53.710882     716 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0910 19:35:53.710882     716 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0910 19:35:53.710882     716 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0910 19:35:53.710882     716 command_runner.go:130] > pod/storage-provisioner created
	I0910 19:35:53.711031     716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1044408s)
	I0910 19:35:54.183487     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:54.183487     716 round_trippers.go:469] Request Headers:
	I0910 19:35:54.183487     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:54.183487     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:54.187046     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:54.187146     716 round_trippers.go:577] Response Headers:
	I0910 19:35:54.187146     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:54 GMT
	I0910 19:35:54.187146     716 round_trippers.go:580]     Audit-Id: 262c5a3a-af5a-4afe-afc4-bccaa30ff20f
	I0910 19:35:54.187146     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:54.187146     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:54.187146     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:54.187146     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:54.187255     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:54.352468     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:35:54.352525     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:35:54.352525     716 sshutil.go:53] new ssh client: &{IP:172.31.210.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:35:54.482233     716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:35:54.618556     716 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0910 19:35:54.618556     716 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 19:35:54.618556     716 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 19:35:54.618556     716 round_trippers.go:463] GET https://172.31.210.71:8443/apis/storage.k8s.io/v1/storageclasses
	I0910 19:35:54.618556     716 round_trippers.go:469] Request Headers:
	I0910 19:35:54.618556     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:54.618556     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:54.622560     716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:35:54.622560     716 round_trippers.go:577] Response Headers:
	I0910 19:35:54.622560     716 round_trippers.go:580]     Content-Length: 1273
	I0910 19:35:54.622560     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:54 GMT
	I0910 19:35:54.622560     716 round_trippers.go:580]     Audit-Id: 9b737106-6556-4e71-bd61-19020ea226bb
	I0910 19:35:54.622560     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:54.622560     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:54.622560     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:54.622560     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:54.622831     716 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"383"},"items":[{"metadata":{"name":"standard","uid":"97e36428-6f6e-4155-86c1-906b29b0d6ae","resourceVersion":"383","creationTimestamp":"2024-09-10T19:35:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-10T19:35:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0910 19:35:54.622911     716 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"97e36428-6f6e-4155-86c1-906b29b0d6ae","resourceVersion":"383","creationTimestamp":"2024-09-10T19:35:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-10T19:35:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0910 19:35:54.622911     716 round_trippers.go:463] PUT https://172.31.210.71:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0910 19:35:54.623451     716 round_trippers.go:469] Request Headers:
	I0910 19:35:54.623451     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:54.623451     716 round_trippers.go:473]     Content-Type: application/json
	I0910 19:35:54.623546     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:54.626144     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:35:54.626144     716 round_trippers.go:577] Response Headers:
	I0910 19:35:54.626144     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:54.626144     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:54.626144     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:54.626144     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:54.626144     716 round_trippers.go:580]     Content-Length: 1220
	I0910 19:35:54.626144     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:54 GMT
	I0910 19:35:54.626144     716 round_trippers.go:580]     Audit-Id: 382cbd9c-dab9-4860-861c-fa52e906ab43
	I0910 19:35:54.627153     716 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"97e36428-6f6e-4155-86c1-906b29b0d6ae","resourceVersion":"383","creationTimestamp":"2024-09-10T19:35:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-10T19:35:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0910 19:35:54.630172     716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0910 19:35:54.633834     716 addons.go:510] duration metric: took 8.7651659s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0910 19:35:54.671811     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:54.671811     716 round_trippers.go:469] Request Headers:
	I0910 19:35:54.671811     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:54.671811     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:54.673794     716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:35:54.673794     716 round_trippers.go:577] Response Headers:
	I0910 19:35:54.673794     716 round_trippers.go:580]     Audit-Id: 869d7d8c-b0b5-484a-8f7e-2fb18863de44
	I0910 19:35:54.673794     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:54.673794     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:54.673794     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:54.673794     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:54.673794     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:54 GMT
	I0910 19:35:54.673794     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:55.173971     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:55.173971     716 round_trippers.go:469] Request Headers:
	I0910 19:35:55.173971     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:55.173971     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:55.177528     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:55.177528     716 round_trippers.go:577] Response Headers:
	I0910 19:35:55.177528     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:55.177528     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:55 GMT
	I0910 19:35:55.177528     716 round_trippers.go:580]     Audit-Id: 872ee566-70cc-4e3f-8c10-5f64a2ecd43f
	I0910 19:35:55.177528     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:55.177528     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:55.177528     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:55.178034     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:55.674274     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:55.674363     716 round_trippers.go:469] Request Headers:
	I0910 19:35:55.674363     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:55.674363     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:55.677695     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:55.678512     716 round_trippers.go:577] Response Headers:
	I0910 19:35:55.678512     716 round_trippers.go:580]     Audit-Id: 8dcf1cbc-040b-4d80-8f4e-14826b60d8b3
	I0910 19:35:55.678512     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:55.678512     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:55.678512     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:55.678512     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:55.678512     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:55 GMT
	I0910 19:35:55.678512     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:55.679621     716 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:35:56.173573     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:56.173639     716 round_trippers.go:469] Request Headers:
	I0910 19:35:56.173639     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:56.173639     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:56.180612     716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:35:56.180612     716 round_trippers.go:577] Response Headers:
	I0910 19:35:56.180612     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:56.180612     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:56.180612     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:56.180612     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:56 GMT
	I0910 19:35:56.180612     716 round_trippers.go:580]     Audit-Id: a464621d-84d8-4297-9c2a-d8a4b61488ec
	I0910 19:35:56.180612     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:56.180612     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:56.671904     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:56.672350     716 round_trippers.go:469] Request Headers:
	I0910 19:35:56.672350     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:56.672350     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:56.676772     716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:35:56.676772     716 round_trippers.go:577] Response Headers:
	I0910 19:35:56.676772     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:56.676772     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:56 GMT
	I0910 19:35:56.676772     716 round_trippers.go:580]     Audit-Id: ac87833b-b120-45fc-afb6-c7bafe0d6a70
	I0910 19:35:56.676772     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:56.677741     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:56.677741     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:56.678156     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:57.170684     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:57.170684     716 round_trippers.go:469] Request Headers:
	I0910 19:35:57.170684     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:57.170684     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:57.174243     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:57.174243     716 round_trippers.go:577] Response Headers:
	I0910 19:35:57.174243     716 round_trippers.go:580]     Audit-Id: 81db3e7f-bc82-4d55-ba2d-848908cc22f7
	I0910 19:35:57.174243     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:57.174243     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:57.174243     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:57.174243     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:57.174243     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:57 GMT
	I0910 19:35:57.174831     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:57.671541     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:57.671628     716 round_trippers.go:469] Request Headers:
	I0910 19:35:57.671628     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:57.671628     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:57.675185     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:57.675185     716 round_trippers.go:577] Response Headers:
	I0910 19:35:57.675185     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:57.675185     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:57.675185     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:57 GMT
	I0910 19:35:57.675185     716 round_trippers.go:580]     Audit-Id: 29de3ef8-ea7a-400e-9160-1aba6d272eeb
	I0910 19:35:57.675185     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:57.675185     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:57.675185     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:58.184569     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:58.184682     716 round_trippers.go:469] Request Headers:
	I0910 19:35:58.184682     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:58.184682     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:58.188507     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:58.188507     716 round_trippers.go:577] Response Headers:
	I0910 19:35:58.188507     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:58 GMT
	I0910 19:35:58.188507     716 round_trippers.go:580]     Audit-Id: 6090a793-6328-48d2-b003-d590a8c7394d
	I0910 19:35:58.188507     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:58.188507     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:58.188692     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:58.188692     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:58.188991     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:58.189865     716 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:35:58.682955     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:58.683029     716 round_trippers.go:469] Request Headers:
	I0910 19:35:58.683029     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:58.683082     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:58.686786     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:58.686786     716 round_trippers.go:577] Response Headers:
	I0910 19:35:58.686786     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:58.686786     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:58.686786     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:58.686786     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:58.686786     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:58 GMT
	I0910 19:35:58.686786     716 round_trippers.go:580]     Audit-Id: 60c85d2a-0282-40e2-837a-ce981cda612e
	I0910 19:35:58.686786     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:59.182837     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:59.182935     716 round_trippers.go:469] Request Headers:
	I0910 19:35:59.182935     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:59.182935     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:59.189539     716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:35:59.189539     716 round_trippers.go:577] Response Headers:
	I0910 19:35:59.189539     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:59.189539     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:59 GMT
	I0910 19:35:59.189539     716 round_trippers.go:580]     Audit-Id: 2348a029-6259-466a-875e-b5bf7020b874
	I0910 19:35:59.189539     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:59.189539     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:59.189539     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:59.189539     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:35:59.682269     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:35:59.682269     716 round_trippers.go:469] Request Headers:
	I0910 19:35:59.682359     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:35:59.682359     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:35:59.685565     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:35:59.685565     716 round_trippers.go:577] Response Headers:
	I0910 19:35:59.685565     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:35:59 GMT
	I0910 19:35:59.685565     716 round_trippers.go:580]     Audit-Id: 6a4f7a70-a8ac-444c-afd3-87abf5c028cd
	I0910 19:35:59.685759     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:35:59.685759     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:35:59.685759     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:35:59.685759     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:35:59.685877     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:00.180739     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:00.180964     716 round_trippers.go:469] Request Headers:
	I0910 19:36:00.181044     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:00.181044     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:00.183763     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:00.183763     716 round_trippers.go:577] Response Headers:
	I0910 19:36:00.183763     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:00 GMT
	I0910 19:36:00.184797     716 round_trippers.go:580]     Audit-Id: ca123143-924d-4801-b274-f0292692ae92
	I0910 19:36:00.184797     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:00.184797     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:00.184797     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:00.184797     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:00.186746     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:00.680656     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:00.680730     716 round_trippers.go:469] Request Headers:
	I0910 19:36:00.680730     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:00.680730     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:00.683850     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:00.683850     716 round_trippers.go:577] Response Headers:
	I0910 19:36:00.683850     716 round_trippers.go:580]     Audit-Id: c4b413dd-41df-40a4-b4f9-857c598d7a21
	I0910 19:36:00.683850     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:00.683850     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:00.683850     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:00.683850     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:00.683850     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:00 GMT
	I0910 19:36:00.684602     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:00.685318     716 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:36:01.179818     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:01.179818     716 round_trippers.go:469] Request Headers:
	I0910 19:36:01.179818     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:01.179922     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:01.182573     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:01.182573     716 round_trippers.go:577] Response Headers:
	I0910 19:36:01.182573     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:01.182573     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:01.182573     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:01.182573     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:01.182573     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:01 GMT
	I0910 19:36:01.182573     716 round_trippers.go:580]     Audit-Id: 23d919ae-5721-47f1-be85-a80861367bb4
	I0910 19:36:01.183447     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:01.684271     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:01.684271     716 round_trippers.go:469] Request Headers:
	I0910 19:36:01.684271     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:01.684271     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:01.693802     716 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 19:36:01.693802     716 round_trippers.go:577] Response Headers:
	I0910 19:36:01.693802     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:01.693802     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:01.693802     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:01 GMT
	I0910 19:36:01.693802     716 round_trippers.go:580]     Audit-Id: b2bbf477-b1dc-4973-b23a-9f1cc7f586dd
	I0910 19:36:01.693802     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:01.693802     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:01.694654     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:02.180921     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:02.180996     716 round_trippers.go:469] Request Headers:
	I0910 19:36:02.180996     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:02.181051     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:02.184642     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:02.185075     716 round_trippers.go:577] Response Headers:
	I0910 19:36:02.185075     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:02.185075     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:02.185075     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:02.185075     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:02.185075     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:02 GMT
	I0910 19:36:02.185075     716 round_trippers.go:580]     Audit-Id: 61953394-3720-4540-b465-4d3a1b4b84e9
	I0910 19:36:02.185161     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:02.679210     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:02.679318     716 round_trippers.go:469] Request Headers:
	I0910 19:36:02.679318     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:02.679318     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:02.682608     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:02.682608     716 round_trippers.go:577] Response Headers:
	I0910 19:36:02.682608     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:02.682608     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:02.682608     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:02 GMT
	I0910 19:36:02.682608     716 round_trippers.go:580]     Audit-Id: c38d8c46-fbdf-4692-879b-c24d449e460f
	I0910 19:36:02.682608     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:02.682608     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:02.682608     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:03.178362     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:03.178362     716 round_trippers.go:469] Request Headers:
	I0910 19:36:03.178457     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:03.178457     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:03.180781     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:03.181799     716 round_trippers.go:577] Response Headers:
	I0910 19:36:03.181799     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:03 GMT
	I0910 19:36:03.181799     716 round_trippers.go:580]     Audit-Id: 0e2f6671-d433-4d5f-a13c-3a528adb6f86
	I0910 19:36:03.181799     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:03.181799     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:03.181799     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:03.181799     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:03.182200     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:03.182767     716 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:36:03.679667     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:03.679667     716 round_trippers.go:469] Request Headers:
	I0910 19:36:03.679667     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:03.679769     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:03.683810     716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:36:03.683810     716 round_trippers.go:577] Response Headers:
	I0910 19:36:03.683810     716 round_trippers.go:580]     Audit-Id: 61bb3834-a75f-46a7-9315-2249403371ae
	I0910 19:36:03.684207     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:03.684207     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:03.684207     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:03.684207     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:03.684207     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:03 GMT
	I0910 19:36:03.684301     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:04.177733     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:04.178110     716 round_trippers.go:469] Request Headers:
	I0910 19:36:04.178110     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:04.178110     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:04.183790     716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:36:04.183790     716 round_trippers.go:577] Response Headers:
	I0910 19:36:04.183790     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:04.183790     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:04.183790     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:04.183790     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:04.183790     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:04 GMT
	I0910 19:36:04.183790     716 round_trippers.go:580]     Audit-Id: d75fcad3-b210-4fb2-905d-e7da74d34ff4
	I0910 19:36:04.183790     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:04.675805     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:04.675805     716 round_trippers.go:469] Request Headers:
	I0910 19:36:04.675805     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:04.675805     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:04.679371     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:04.679371     716 round_trippers.go:577] Response Headers:
	I0910 19:36:04.679634     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:04.679634     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:04.679634     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:04.679634     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:04.679634     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:04 GMT
	I0910 19:36:04.679634     716 round_trippers.go:580]     Audit-Id: 6eab8352-0524-4a8b-beec-d544969fb8c3
	I0910 19:36:04.679634     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:05.174003     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:05.174082     716 round_trippers.go:469] Request Headers:
	I0910 19:36:05.174082     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:05.174082     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:05.180315     716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:36:05.180315     716 round_trippers.go:577] Response Headers:
	I0910 19:36:05.180315     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:05.180315     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:05.180315     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:05.180315     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:05 GMT
	I0910 19:36:05.180315     716 round_trippers.go:580]     Audit-Id: 42ef8377-0477-4f80-849b-87eea02b3e79
	I0910 19:36:05.180315     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:05.181282     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:05.672516     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:05.672605     716 round_trippers.go:469] Request Headers:
	I0910 19:36:05.672605     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:05.672725     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:05.676004     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:05.676004     716 round_trippers.go:577] Response Headers:
	I0910 19:36:05.676004     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:05 GMT
	I0910 19:36:05.676004     716 round_trippers.go:580]     Audit-Id: 3da1d062-588e-4d24-908d-c9ada0657aea
	I0910 19:36:05.676004     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:05.676004     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:05.676004     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:05.676004     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:05.676447     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:05.677267     716 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:36:06.183654     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:06.183752     716 round_trippers.go:469] Request Headers:
	I0910 19:36:06.183752     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:06.183752     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:06.187697     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:06.188148     716 round_trippers.go:577] Response Headers:
	I0910 19:36:06.188148     716 round_trippers.go:580]     Audit-Id: 644e33c6-6d08-4019-9ce4-c1e41927f8e7
	I0910 19:36:06.188148     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:06.188148     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:06.188148     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:06.188148     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:06.188148     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:06 GMT
	I0910 19:36:06.188250     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"317","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4920 chars]
	I0910 19:36:06.683437     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:06.683437     716 round_trippers.go:469] Request Headers:
	I0910 19:36:06.683437     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:06.683437     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:06.686010     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:06.686725     716 round_trippers.go:577] Response Headers:
	I0910 19:36:06.686725     716 round_trippers.go:580]     Audit-Id: 32097c5a-aff6-4f27-b343-b8be7903c090
	I0910 19:36:06.686725     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:06.686725     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:06.686725     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:06.686725     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:06.686840     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:06 GMT
	I0910 19:36:06.686911     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:06.687642     716 node_ready.go:49] node "multinode-629100" has status "Ready":"True"
	I0910 19:36:06.687711     716 node_ready.go:38] duration metric: took 20.0182985s for node "multinode-629100" to be "Ready" ...
	I0910 19:36:06.687787     716 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:36:06.688002     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods
	I0910 19:36:06.688076     716 round_trippers.go:469] Request Headers:
	I0910 19:36:06.688076     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:06.688076     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:06.691520     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:06.691520     716 round_trippers.go:577] Response Headers:
	I0910 19:36:06.691520     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:06 GMT
	I0910 19:36:06.691520     716 round_trippers.go:580]     Audit-Id: e1d9465d-10ba-431d-9350-8df1069523c9
	I0910 19:36:06.691520     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:06.691520     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:06.691520     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:06.691520     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:06.695639     716 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"395","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57814 chars]
	I0910 19:36:06.700660     716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:06.700660     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:36:06.700660     716 round_trippers.go:469] Request Headers:
	I0910 19:36:06.700660     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:06.700660     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:06.703263     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:06.703263     716 round_trippers.go:577] Response Headers:
	I0910 19:36:06.703263     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:06 GMT
	I0910 19:36:06.703263     716 round_trippers.go:580]     Audit-Id: e908c17b-1a7f-4284-b44d-75e88d0155b5
	I0910 19:36:06.703263     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:06.703263     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:06.703263     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:06.703263     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:06.703848     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"395","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0910 19:36:06.704449     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:06.704504     716 round_trippers.go:469] Request Headers:
	I0910 19:36:06.704504     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:06.704504     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:06.706853     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:06.706853     716 round_trippers.go:577] Response Headers:
	I0910 19:36:06.706853     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:06.706853     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:06.706853     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:06 GMT
	I0910 19:36:06.706853     716 round_trippers.go:580]     Audit-Id: 16328795-82bb-453d-9821-d6b85ff11411
	I0910 19:36:06.706853     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:06.706853     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:06.706853     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:07.203458     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:36:07.203511     716 round_trippers.go:469] Request Headers:
	I0910 19:36:07.203511     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:07.203511     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:07.206691     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:07.206691     716 round_trippers.go:577] Response Headers:
	I0910 19:36:07.206691     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:07 GMT
	I0910 19:36:07.206691     716 round_trippers.go:580]     Audit-Id: 3fe8b03c-905b-463c-ae40-ac95a3e7808f
	I0910 19:36:07.206691     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:07.206691     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:07.206691     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:07.206691     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:07.206691     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"395","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0910 19:36:07.208092     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:07.208092     716 round_trippers.go:469] Request Headers:
	I0910 19:36:07.208092     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:07.208092     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:07.210467     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:07.210467     716 round_trippers.go:577] Response Headers:
	I0910 19:36:07.210765     716 round_trippers.go:580]     Audit-Id: c1157033-526b-46e9-98e0-17128ca69a19
	I0910 19:36:07.210765     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:07.210765     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:07.210765     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:07.210765     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:07.210765     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:07 GMT
	I0910 19:36:07.210765     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:07.713075     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:36:07.713075     716 round_trippers.go:469] Request Headers:
	I0910 19:36:07.713075     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:07.713075     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:07.723111     716 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 19:36:07.723111     716 round_trippers.go:577] Response Headers:
	I0910 19:36:07.723111     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:07.723111     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:07.723111     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:07.723111     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:07 GMT
	I0910 19:36:07.723111     716 round_trippers.go:580]     Audit-Id: e8df9b35-4475-411d-83cf-7171a3eab925
	I0910 19:36:07.723111     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:07.723622     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"395","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0910 19:36:07.724575     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:07.724575     716 round_trippers.go:469] Request Headers:
	I0910 19:36:07.724608     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:07.724608     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:07.727193     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:07.727193     716 round_trippers.go:577] Response Headers:
	I0910 19:36:07.727602     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:07.727602     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:07.727602     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:07.727602     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:07.727602     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:07 GMT
	I0910 19:36:07.727602     716 round_trippers.go:580]     Audit-Id: 37623a00-9286-4040-833f-df78143a6bd2
	I0910 19:36:07.727683     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:08.216023     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:36:08.216111     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.216111     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.216111     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.219495     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:08.219495     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.219495     716 round_trippers.go:580]     Audit-Id: d72fd28d-80c6-4448-bf16-9897c25820eb
	I0910 19:36:08.219495     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.219798     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.219798     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.219953     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.219981     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.220410     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"405","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6834 chars]
	I0910 19:36:08.221465     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:08.221549     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.221549     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.221549     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.223891     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:08.223891     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.223891     716 round_trippers.go:580]     Audit-Id: 087618aa-c47b-4b56-b46e-380207e2a211
	I0910 19:36:08.223891     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.224207     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.224207     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.224207     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.224207     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.224417     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:08.225084     716 pod_ready.go:93] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"True"
	I0910 19:36:08.225084     716 pod_ready.go:82] duration metric: took 1.5243221s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.225160     716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.225242     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 19:36:08.225321     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.225321     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.225398     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.228541     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:08.228541     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.228541     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.228541     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.228541     716 round_trippers.go:580]     Audit-Id: 32f42b0c-8031-477a-b3fa-b83767e3a9ed
	I0910 19:36:08.228541     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.228541     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.228541     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.229368     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"9df7e62a-50ad-4d7e-97eb-e9c494a0892b","resourceVersion":"369","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.210.71:2379","kubernetes.io/config.hash":"0ee82e84ae3f2eed59191657b2917fe8","kubernetes.io/config.mirror":"0ee82e84ae3f2eed59191657b2917fe8","kubernetes.io/config.seen":"2024-09-10T19:35:40.972003382Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6465 chars]
	I0910 19:36:08.229991     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:08.229991     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.229991     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.229991     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.232552     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:08.232552     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.232552     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.232552     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.232552     716 round_trippers.go:580]     Audit-Id: 489d9415-6ae6-4207-a039-92ebee17113b
	I0910 19:36:08.232552     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.232552     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.232552     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.232552     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:08.233550     716 pod_ready.go:93] pod "etcd-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:36:08.233550     716 pod_ready.go:82] duration metric: took 8.3899ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.233550     716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.233550     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 19:36:08.233550     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.233550     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.233550     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.236536     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:08.236536     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.236536     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.236536     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.236640     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.236640     716 round_trippers.go:580]     Audit-Id: 4c70e9cc-8cb9-4368-8712-92b66806c3ef
	I0910 19:36:08.236640     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.236640     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.236640     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"8dcf96e7-f1c8-4b97-b0a8-e4b79bd7566c","resourceVersion":"357","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.210.71:8443","kubernetes.io/config.hash":"2100bf04dc545540399a042d07adc1da","kubernetes.io/config.mirror":"2100bf04dc545540399a042d07adc1da","kubernetes.io/config.seen":"2024-09-10T19:35:40.972007582Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0910 19:36:08.237375     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:08.237375     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.237375     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.237375     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.239941     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:08.240129     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.240129     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.240129     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.240129     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.240129     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.240129     716 round_trippers.go:580]     Audit-Id: f1e78136-6e16-454a-92e4-4e8f04f59f94
	I0910 19:36:08.240129     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.240129     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:08.240700     716 pod_ready.go:93] pod "kube-apiserver-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:36:08.240753     716 pod_ready.go:82] duration metric: took 7.2026ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.240792     716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.240901     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 19:36:08.240901     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.240901     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.240901     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.242611     716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:36:08.242611     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.242611     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.242611     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.242611     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.242611     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.242611     716 round_trippers.go:580]     Audit-Id: 7617c86c-6093-498b-b0fb-682472bb7ead
	I0910 19:36:08.242611     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.242611     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"310","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0910 19:36:08.243617     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:08.243617     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.243617     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.243617     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.246510     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:08.246604     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.246604     716 round_trippers.go:580]     Audit-Id: f7ef2e76-cb98-4bb8-a317-cd982ebb5380
	I0910 19:36:08.246604     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.246639     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.246639     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.246639     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.246639     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.246832     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:08.246832     716 pod_ready.go:93] pod "kube-controller-manager-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:36:08.246832     716 pod_ready.go:82] duration metric: took 6.0392ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.246832     716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.246832     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:36:08.246832     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.247355     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.247407     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.249573     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:08.249573     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.249573     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.249573     716 round_trippers.go:580]     Audit-Id: 2cdae4af-a037-4a42-864f-a0649eeb39ba
	I0910 19:36:08.249573     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.249573     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.249573     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.249573     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.250097     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"362","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6194 chars]
	I0910 19:36:08.294907     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:08.295121     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.295121     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.295121     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.297868     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:08.297868     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.297868     716 round_trippers.go:580]     Audit-Id: b8cabe0f-ca54-46b6-94f7-25cfd6246c07
	I0910 19:36:08.297868     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.297868     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.297868     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.297868     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.297868     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.298299     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:08.298706     716 pod_ready.go:93] pod "kube-proxy-wqf2d" in "kube-system" namespace has status "Ready":"True"
	I0910 19:36:08.298706     716 pod_ready.go:82] duration metric: took 51.8706ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.298706     716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.484217     716 request.go:632] Waited for 185.3976ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:36:08.484324     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:36:08.484324     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.484403     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.484403     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.486988     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:08.486988     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.486988     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.487717     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.487717     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.487717     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.487717     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.487717     716 round_trippers.go:580]     Audit-Id: e999d420-8372-4f0a-b72f-a9eba68d62e1
	I0910 19:36:08.488020     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"371","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0910 19:36:08.685718     716 request.go:632] Waited for 196.8048ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:08.685718     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:36:08.685718     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.685718     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.685718     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.689326     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:08.689523     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.689523     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.689523     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.689523     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.689523     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.689523     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.689523     716 round_trippers.go:580]     Audit-Id: 103cedac-c91b-49d0-b992-c15aa6d567e5
	I0910 19:36:08.689628     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4775 chars]
	I0910 19:36:08.690179     716 pod_ready.go:93] pod "kube-scheduler-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:36:08.690291     716 pod_ready.go:82] duration metric: took 391.5593ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:36:08.690291     716 pod_ready.go:39] duration metric: took 2.0023038s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:36:08.690392     716 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:36:08.699908     716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:36:08.726886     716 command_runner.go:130] > 2145
	I0910 19:36:08.726952     716 api_server.go:72] duration metric: took 22.8571295s to wait for apiserver process to appear ...
	I0910 19:36:08.726952     716 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:36:08.726952     716 api_server.go:253] Checking apiserver healthz at https://172.31.210.71:8443/healthz ...
	I0910 19:36:08.735441     716 api_server.go:279] https://172.31.210.71:8443/healthz returned 200:
	ok
	I0910 19:36:08.736407     716 round_trippers.go:463] GET https://172.31.210.71:8443/version
	I0910 19:36:08.736559     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.736559     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.736590     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.737205     716 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0910 19:36:08.737825     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.737825     716 round_trippers.go:580]     Audit-Id: ee70ba7e-e47f-472b-9038-04b8977f04de
	I0910 19:36:08.737825     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.737825     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.737825     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.737825     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.737825     716 round_trippers.go:580]     Content-Length: 263
	I0910 19:36:08.737825     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:08 GMT
	I0910 19:36:08.737825     716 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0910 19:36:08.737825     716 api_server.go:141] control plane version: v1.31.0
	I0910 19:36:08.737825     716 api_server.go:131] duration metric: took 10.8717ms to wait for apiserver health ...
	I0910 19:36:08.737825     716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:36:08.888366     716 request.go:632] Waited for 150.4688ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods
	I0910 19:36:08.888366     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods
	I0910 19:36:08.888366     716 round_trippers.go:469] Request Headers:
	I0910 19:36:08.888366     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:08.888366     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:08.895027     716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:36:08.895027     716 round_trippers.go:577] Response Headers:
	I0910 19:36:08.895027     716 round_trippers.go:580]     Audit-Id: a4d80d2e-c0a0-4bc7-8c43-3d7b4277f327
	I0910 19:36:08.895027     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:08.895027     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:08.895027     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:08.895027     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:08.895027     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:09 GMT
	I0910 19:36:08.897290     716 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"405","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57928 chars]
	I0910 19:36:08.899983     716 system_pods.go:59] 8 kube-system pods found
	I0910 19:36:08.899983     716 system_pods.go:61] "coredns-6f6b679f8f-srtv8" [76dd899a-75f4-497d-a6a9-6b263f3a379d] Running
	I0910 19:36:08.899983     716 system_pods.go:61] "etcd-multinode-629100" [9df7e62a-50ad-4d7e-97eb-e9c494a0892b] Running
	I0910 19:36:08.899983     716 system_pods.go:61] "kindnet-lj2v2" [6643175a-32c5-4461-a441-08b9ac9ba98a] Running
	I0910 19:36:08.899983     716 system_pods.go:61] "kube-apiserver-multinode-629100" [8dcf96e7-f1c8-4b97-b0a8-e4b79bd7566c] Running
	I0910 19:36:08.899983     716 system_pods.go:61] "kube-controller-manager-multinode-629100" [60adcb0c-808a-477c-9432-83dc8f96d6c0] Running
	I0910 19:36:08.899983     716 system_pods.go:61] "kube-proxy-wqf2d" [27e7846e-e506-48e0-96b9-351b3ca91703] Running
	I0910 19:36:08.899983     716 system_pods.go:61] "kube-scheduler-multinode-629100" [fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7] Running
	I0910 19:36:08.899983     716 system_pods.go:61] "storage-provisioner" [511e6e52-fc2a-4562-9903-42ee1f2e0a2d] Running
	I0910 19:36:08.899983     716 system_pods.go:74] duration metric: took 162.1478ms to wait for pod list to return data ...
	I0910 19:36:08.899983     716 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:36:09.088934     716 request.go:632] Waited for 188.6904ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/namespaces/default/serviceaccounts
	I0910 19:36:09.088934     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/default/serviceaccounts
	I0910 19:36:09.088934     716 round_trippers.go:469] Request Headers:
	I0910 19:36:09.088934     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:09.088934     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:09.091589     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:36:09.091589     716 round_trippers.go:577] Response Headers:
	I0910 19:36:09.091589     716 round_trippers.go:580]     Audit-Id: 7a1e4d21-0c59-4188-bf73-edff17b3a349
	I0910 19:36:09.091589     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:09.091589     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:09.091589     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:09.091589     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:09.091589     716 round_trippers.go:580]     Content-Length: 261
	I0910 19:36:09.091589     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:09 GMT
	I0910 19:36:09.091589     716 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5ec55b5c-25b1-4463-9e6c-90f1cae6d2f9","resourceVersion":"302","creationTimestamp":"2024-09-10T19:35:46Z"}}]}
	I0910 19:36:09.092762     716 default_sa.go:45] found service account: "default"
	I0910 19:36:09.092856     716 default_sa.go:55] duration metric: took 192.8604ms for default service account to be created ...
	I0910 19:36:09.092856     716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:36:09.292736     716 request.go:632] Waited for 199.521ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods
	I0910 19:36:09.292848     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods
	I0910 19:36:09.292848     716 round_trippers.go:469] Request Headers:
	I0910 19:36:09.293066     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:09.293066     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:09.296811     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:09.296811     716 round_trippers.go:577] Response Headers:
	I0910 19:36:09.296811     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:09.296811     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:09.296811     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:09.296811     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:09.296811     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:09 GMT
	I0910 19:36:09.296811     716 round_trippers.go:580]     Audit-Id: 9368af37-875a-4da7-b8a4-b72b8e4f5854
	I0910 19:36:09.298326     716 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"405","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57928 chars]
	I0910 19:36:09.303769     716 system_pods.go:86] 8 kube-system pods found
	I0910 19:36:09.303886     716 system_pods.go:89] "coredns-6f6b679f8f-srtv8" [76dd899a-75f4-497d-a6a9-6b263f3a379d] Running
	I0910 19:36:09.303945     716 system_pods.go:89] "etcd-multinode-629100" [9df7e62a-50ad-4d7e-97eb-e9c494a0892b] Running
	I0910 19:36:09.303996     716 system_pods.go:89] "kindnet-lj2v2" [6643175a-32c5-4461-a441-08b9ac9ba98a] Running
	I0910 19:36:09.303996     716 system_pods.go:89] "kube-apiserver-multinode-629100" [8dcf96e7-f1c8-4b97-b0a8-e4b79bd7566c] Running
	I0910 19:36:09.303996     716 system_pods.go:89] "kube-controller-manager-multinode-629100" [60adcb0c-808a-477c-9432-83dc8f96d6c0] Running
	I0910 19:36:09.303996     716 system_pods.go:89] "kube-proxy-wqf2d" [27e7846e-e506-48e0-96b9-351b3ca91703] Running
	I0910 19:36:09.303996     716 system_pods.go:89] "kube-scheduler-multinode-629100" [fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7] Running
	I0910 19:36:09.303996     716 system_pods.go:89] "storage-provisioner" [511e6e52-fc2a-4562-9903-42ee1f2e0a2d] Running
	I0910 19:36:09.304730     716 system_pods.go:126] duration metric: took 211.8277ms to wait for k8s-apps to be running ...
	I0910 19:36:09.304730     716 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:36:09.313736     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:36:09.335941     716 system_svc.go:56] duration metric: took 31.2089ms WaitForService to wait for kubelet
	I0910 19:36:09.335941     716 kubeadm.go:582] duration metric: took 23.4660775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:36:09.335941     716 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:36:09.495735     716 request.go:632] Waited for 159.5789ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/nodes
	I0910 19:36:09.495816     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes
	I0910 19:36:09.495816     716 round_trippers.go:469] Request Headers:
	I0910 19:36:09.495816     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:36:09.495816     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:36:09.499395     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:36:09.499395     716 round_trippers.go:577] Response Headers:
	I0910 19:36:09.499395     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:36:09.499395     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:36:09.499395     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:36:09.499395     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:36:09.499395     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:36:09 GMT
	I0910 19:36:09.499395     716 round_trippers.go:580]     Audit-Id: dfd0eb64-5c10-46c1-902c-2603426757bf
	I0910 19:36:09.499692     716 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"391","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4828 chars]
	I0910 19:36:09.500343     716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:36:09.500450     716 node_conditions.go:123] node cpu capacity is 2
	I0910 19:36:09.500450     716 node_conditions.go:105] duration metric: took 164.4985ms to run NodePressure ...
	I0910 19:36:09.500450     716 start.go:241] waiting for startup goroutines ...
	I0910 19:36:09.500450     716 start.go:246] waiting for cluster config update ...
	I0910 19:36:09.500562     716 start.go:255] writing updated cluster config ...
	I0910 19:36:09.505259     716 out.go:201] 
	I0910 19:36:09.509181     716 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:36:09.519717     716 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:36:09.519803     716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:36:09.525671     716 out.go:177] * Starting "multinode-629100-m02" worker node in "multinode-629100" cluster
	I0910 19:36:09.527891     716 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:36:09.527891     716 cache.go:56] Caching tarball of preloaded images
	I0910 19:36:09.528262     716 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 19:36:09.528262     716 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 19:36:09.528628     716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:36:09.534685     716 start.go:360] acquireMachinesLock for multinode-629100-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:36:09.535627     716 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-629100-m02"
	I0910 19:36:09.535760     716 start.go:93] Provisioning new machine with config: &{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.210.71 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:36:09.535760     716 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0910 19:36:09.539017     716 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 19:36:09.539017     716 start.go:159] libmachine.API.Create for "multinode-629100" (driver="hyperv")
	I0910 19:36:09.539017     716 client.go:168] LocalClient.Create starting
	I0910 19:36:09.539382     716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0910 19:36:09.539382     716 main.go:141] libmachine: Decoding PEM data...
	I0910 19:36:09.539382     716 main.go:141] libmachine: Parsing certificate...
	I0910 19:36:09.539972     716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0910 19:36:09.540179     716 main.go:141] libmachine: Decoding PEM data...
	I0910 19:36:09.540233     716 main.go:141] libmachine: Parsing certificate...
	I0910 19:36:09.540283     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0910 19:36:11.329279     716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0910 19:36:11.329279     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:11.330245     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0910 19:36:12.948959     716 main.go:141] libmachine: [stdout =====>] : False
	
	I0910 19:36:12.949644     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:12.949644     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 19:36:14.334207     716 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 19:36:14.334716     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:14.334814     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 19:36:17.540695     716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 19:36:17.540695     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:17.542444     716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 19:36:17.952983     716 main.go:141] libmachine: Creating SSH key...
	I0910 19:36:18.228363     716 main.go:141] libmachine: Creating VM...
	I0910 19:36:18.228874     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0910 19:36:20.849070     716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0910 19:36:20.849070     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:20.849170     716 main.go:141] libmachine: Using switch "Default Switch"
	I0910 19:36:20.849170     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0910 19:36:22.484747     716 main.go:141] libmachine: [stdout =====>] : True
	
	I0910 19:36:22.485606     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:22.485606     716 main.go:141] libmachine: Creating VHD
	I0910 19:36:22.485606     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0910 19:36:25.912302     716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : EB2F55A3-816D-4031-9196-875A91AF55A2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0910 19:36:25.912378     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:25.912378     716 main.go:141] libmachine: Writing magic tar header
	I0910 19:36:25.912448     716 main.go:141] libmachine: Writing SSH key tar header
	I0910 19:36:25.922674     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0910 19:36:28.879953     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:36:28.880970     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:28.881031     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\disk.vhd' -SizeBytes 20000MB
	I0910 19:36:31.233104     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:36:31.233888     716 main.go:141] libmachine: [stderr =====>] : 
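The disk sequence above (create a small fixed VHD, write "magic tar" and SSH-key tar headers into it, convert it to a dynamic VHD, then resize) seeds a raw disk that the boot2docker-style guest formats and reads on first boot. A minimal sketch of the tar-embedding step in Python; the exact entry names and offsets are assumptions for illustration, not minikube's actual layout:

```python
import io
import os
import tarfile
import tempfile

def write_magic_tar(disk_path: str, key_name: str, key_bytes: bytes) -> None:
    """Embed a tar archive containing an SSH key at the start of a raw
    fixed-size disk image. A boot2docker-style guest can scan the start of
    the disk for this archive on first boot (sketch; entry names are
    hypothetical)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=key_name)
        info.size = len(key_bytes)
        tar.addfile(info, io.BytesIO(key_bytes))
    with open(disk_path, "r+b") as disk:
        disk.seek(0)  # fixed VHDs put raw data first, footer last
        disk.write(buf.getvalue())

# usage: make a 10MB "disk", embed a dummy key, read it back out
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(10 * 1024 * 1024)
    path = f.name
write_magic_tar(path, "id_rsa", b"dummy-key")
with open(path, "rb") as disk:
    with tarfile.open(fileobj=io.BytesIO(disk.read()), mode="r") as tar:
        names = tar.getnames()
        key = tar.extractfile("id_rsa").read()
os.unlink(path)
```

Converting to a dynamic VHD afterwards (as `Convert-VHD ... -VHDType Dynamic` does in the log) keeps the on-disk file small until the guest actually writes data.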
	I0910 19:36:31.233888     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-629100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0910 19:36:34.412018     716 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-629100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0910 19:36:34.413084     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:34.413153     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-629100-m02 -DynamicMemoryEnabled $false
	I0910 19:36:36.376448     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:36:36.376448     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:36.376522     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-629100-m02 -Count 2
	I0910 19:36:38.282610     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:36:38.282983     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:38.283056     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-629100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\boot2docker.iso'
	I0910 19:36:40.554720     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:36:40.554720     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:40.555794     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-629100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\disk.vhd'
	I0910 19:36:42.874415     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:36:42.874415     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:42.874415     716 main.go:141] libmachine: Starting VM...
	I0910 19:36:42.875490     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-629100-m02
	I0910 19:36:45.695265     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:36:45.695708     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:45.695755     716 main.go:141] libmachine: Waiting for host to start...
	I0910 19:36:45.695835     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:36:47.762326     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:36:47.763141     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:47.763215     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:36:49.977409     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:36:49.977409     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:50.991225     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:36:52.930231     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:36:52.930231     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:52.930231     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:36:55.129338     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:36:55.129338     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:56.133078     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:36:58.058471     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:36:58.058471     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:36:58.059393     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:00.293553     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:37:00.293553     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:01.298843     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:03.281830     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:03.282404     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:03.282468     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:05.542435     716 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:37:05.542535     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:06.550387     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:08.540701     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:08.540701     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:08.540701     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:10.892397     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:10.892397     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:10.892397     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:12.820539     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:12.820539     716 main.go:141] libmachine: [stderr =====>] : 
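The repeated `Get-VM ... .state` / `.networkadapters[0].ipaddresses[0]` pairs above are a poll loop: Hyper-V reports the VM as Running immediately, but the adapter has no address until the guest's DHCP lease completes, so the driver re-queries roughly once per second until a non-empty IP comes back. The pattern can be sketched generically (hypothetical helper, not minikube's actual code):

```python
import time

def wait_for(get_value, timeout=120.0, interval=1.0):
    """Poll get_value() until it returns a truthy result or the timeout
    expires, mirroring the state/IP polling loop in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = get_value()
        if value:
            return value
        time.sleep(interval)
    raise TimeoutError("no value within %.0fs" % timeout)

# usage: simulate an adapter that reports an IP on the third query,
# like the empty stdout lines followed by 172.31.209.0 above
answers = iter(["", "", "172.31.209.0"])
ip = wait_for(lambda: next(answers), timeout=5.0, interval=0.01)
```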
	I0910 19:37:12.820539     716 machine.go:93] provisionDockerMachine start ...
	I0910 19:37:12.820539     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:14.743135     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:14.743135     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:14.743884     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:17.034511     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:17.034511     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:17.038601     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:37:17.050517     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.209.0 22 <nil> <nil>}
	I0910 19:37:17.050517     716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:37:17.182485     716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:37:17.182561     716 buildroot.go:166] provisioning hostname "multinode-629100-m02"
	I0910 19:37:17.182627     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:19.029401     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:19.029401     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:19.029401     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:21.285880     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:21.285880     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:21.289730     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:37:21.290152     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.209.0 22 <nil> <nil>}
	I0910 19:37:21.290152     716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629100-m02 && echo "multinode-629100-m02" | sudo tee /etc/hostname
	I0910 19:37:21.454793     716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629100-m02
	
	I0910 19:37:21.454878     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:23.328269     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:23.328269     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:23.329264     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:25.574633     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:25.574633     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:25.578694     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:37:25.579486     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.209.0 22 <nil> <nil>}
	I0910 19:37:25.579486     716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:37:25.726725     716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
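The shell snippet just executed is an idempotent /etc/hosts edit: if any line already ends with the hostname it does nothing, otherwise it rewrites an existing `127.0.1.1` entry in place, or appends one. The same logic as a small Python sketch (an illustration of the shell above, not code from minikube):

```python
import re

def ensure_hostname(hosts_text: str, hostname: str) -> str:
    """Ensure /etc/hosts content maps 127.0.1.1 to hostname, matching the
    grep/sed/tee fallback chain in the log. Safe to run repeatedly."""
    # already present on some line? (grep -xq '.*\shostname')
    if re.search(r"\s" + re.escape(hostname) + r"$", hosts_text, re.M):
        return hosts_text
    # existing 127.0.1.1 entry? rewrite it (sed -i 's/^127.0.1.1\s.*/.../')
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$",
                      "127.0.1.1 " + hostname, hosts_text, flags=re.M)
    # otherwise append (echo ... | tee -a /etc/hosts)
    return hosts_text.rstrip("\n") + "\n127.0.1.1 " + hostname + "\n"

# usage
out = ensure_hostname("127.0.0.1 localhost\n", "multinode-629100-m02")
```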
	I0910 19:37:25.726725     716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 19:37:25.726725     716 buildroot.go:174] setting up certificates
	I0910 19:37:25.726725     716 provision.go:84] configureAuth start
	I0910 19:37:25.726725     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:27.639642     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:27.639642     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:27.640678     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:29.893174     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:29.893966     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:29.893998     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:31.778940     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:31.778940     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:31.779025     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:33.995827     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:33.995827     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:33.995827     716 provision.go:143] copyHostCerts
	I0910 19:37:33.996084     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 19:37:33.996084     716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 19:37:33.996084     716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 19:37:33.996610     716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 19:37:33.997520     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 19:37:33.997715     716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 19:37:33.997750     716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 19:37:33.997967     716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 19:37:33.998854     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 19:37:33.999050     716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 19:37:33.999050     716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 19:37:33.999437     716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 19:37:33.999889     716 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-629100-m02 san=[127.0.0.1 172.31.209.0 localhost minikube multinode-629100-m02]
	I0910 19:37:34.212040     716 provision.go:177] copyRemoteCerts
	I0910 19:37:34.221181     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:37:34.221181     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:36.096504     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:36.096504     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:36.096795     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:38.346375     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:38.346375     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:38.346752     716 sshutil.go:53] new ssh client: &{IP:172.31.209.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:37:38.454109     716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2326171s)
	I0910 19:37:38.454109     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 19:37:38.454109     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:37:38.494702     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 19:37:38.495551     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 19:37:38.540500     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 19:37:38.540856     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0910 19:37:38.581455     716 provision.go:87] duration metric: took 12.8538755s to configureAuth
	I0910 19:37:38.581455     716 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:37:38.582614     716 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:37:38.582614     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:40.461232     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:40.461232     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:40.461232     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:42.720672     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:42.720672     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:42.729215     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:37:42.729557     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.209.0 22 <nil> <nil>}
	I0910 19:37:42.729557     716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 19:37:42.863883     716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 19:37:42.863973     716 buildroot.go:70] root file system type: tmpfs
	I0910 19:37:42.864199     716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 19:37:42.864199     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:44.777483     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:44.778274     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:44.778274     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:47.068963     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:47.068963     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:47.072683     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:37:47.073073     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.209.0 22 <nil> <nil>}
	I0910 19:37:47.073163     716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.210.71"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 19:37:47.223516     716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.210.71
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
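The unit file above relies on systemd's drop-in merge rule: an empty `ExecStart=` assignment clears any command inherited from the base unit before the real command is set. A minimal shell simulation of that reset semantics (this is not systemd itself, just an illustration of the merge rule; the dockerd command line mirrors the one in the log):

```shell
# Simulate systemd's ExecStart merge: an empty assignment resets the
# accumulated command list, so only commands after it survive.
merged=""
while IFS= read -r line; do
  case "$line" in
    ExecStart=)  merged="" ;;                           # empty value: reset
    ExecStart=*) merged="$merged${line#ExecStart=}"$'\n' ;;
  esac
done <<'EOF'
ExecStart=/usr/bin/dockerd-from-base-unit
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376
EOF
printf '%s' "$merged"
```

Without the blank `ExecStart=`, systemd would see two commands on a `Type=notify` service and refuse to start it, exactly as the comment in the unit file warns.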
	
	I0910 19:37:47.223516     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:49.109327     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:49.109327     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:49.109394     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:51.359333     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:51.359441     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:51.365791     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:37:51.366538     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.209.0 22 <nil> <nil>}
	I0910 19:37:51.366617     716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 19:37:53.546407     716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
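The SSH command above uses a common update-if-changed idiom: replace the service file (and restart the daemon) only when the newly rendered file differs from what is on disk. A local sketch with temp-file stand-ins for the real paths:

```shell
# diff exits non-zero when files differ (or, as in the log, when the target
# does not exist yet), which triggers the replace branch.
d=$(mktemp -d)
printf 'old\n' > "$d/docker.service"
printf 'new\n' > "$d/docker.service.new"
diff -u "$d/docker.service" "$d/docker.service.new" >/dev/null || {
  mv "$d/docker.service.new" "$d/docker.service"
  # on the real host this branch continues with:
  # systemctl daemon-reload && systemctl enable docker && systemctl restart docker
}
result=$(cat "$d/docker.service")
```

In the log the `diff: can't stat` error is this same mechanism on a first provision: the target file doesn't exist yet, so `diff` fails and the new file is moved into place.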
	
	I0910 19:37:53.546407     716 machine.go:96] duration metric: took 40.723165s to provisionDockerMachine
	I0910 19:37:53.546407     716 client.go:171] duration metric: took 1m44.0004697s to LocalClient.Create
	I0910 19:37:53.546407     716 start.go:167] duration metric: took 1m44.0004697s to libmachine.API.Create "multinode-629100"
	I0910 19:37:53.546407     716 start.go:293] postStartSetup for "multinode-629100-m02" (driver="hyperv")
	I0910 19:37:53.546407     716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:37:53.555588     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:37:53.555588     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:55.363731     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:55.363776     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:55.363776     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:37:57.554070     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:37:57.555117     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:57.555560     716 sshutil.go:53] new ssh client: &{IP:172.31.209.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:37:57.654453     716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.0985625s)
	I0910 19:37:57.662571     716 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:37:57.669677     716 command_runner.go:130] > NAME=Buildroot
	I0910 19:37:57.669738     716 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 19:37:57.669738     716 command_runner.go:130] > ID=buildroot
	I0910 19:37:57.669738     716 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 19:37:57.669768     716 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 19:37:57.669768     716 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:37:57.669768     716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 19:37:57.669768     716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 19:37:57.670770     716 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 19:37:57.670770     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 19:37:57.678576     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:37:57.695787     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 19:37:57.741484     716 start.go:296] duration metric: took 4.1947988s for postStartSetup
	I0910 19:37:57.743630     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:37:59.592473     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:37:59.592473     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:37:59.593157     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:38:01.854618     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:38:01.854618     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:01.854618     716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:38:01.859185     716 start.go:128] duration metric: took 1m52.3159168s to createHost
	I0910 19:38:01.859270     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:38:03.722416     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:38:03.722416     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:03.722705     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:38:05.987189     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:38:05.987261     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:05.992095     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:38:05.992469     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.209.0 22 <nil> <nil>}
	I0910 19:38:05.992469     716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:38:06.132425     716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725997086.353362164
	
	I0910 19:38:06.132425     716 fix.go:216] guest clock: 1725997086.353362164
	I0910 19:38:06.132425     716 fix.go:229] Guest: 2024-09-10 19:38:06.353362164 +0000 UTC Remote: 2024-09-10 19:38:01.8591857 +0000 UTC m=+310.497995701 (delta=4.494176464s)
	I0910 19:38:06.132425     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:38:07.988506     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:38:07.988506     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:07.988686     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:38:10.207220     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:38:10.207525     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:10.211182     716 main.go:141] libmachine: Using SSH client type: native
	I0910 19:38:10.211316     716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.209.0 22 <nil> <nil>}
	I0910 19:38:10.211316     716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725997086
	I0910 19:38:10.361400     716 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 19:38:06 UTC 2024
	
	I0910 19:38:10.361400     716 fix.go:236] clock set: Tue Sep 10 19:38:06 UTC 2024
	 (err=<nil>)
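The clock-fix sequence above samples the guest's epoch time with `date +%s.%N`, computes the host-guest delta (4.49s here), and resets the guest clock with `sudo date -s @<epoch>`. A local sketch of the delta computation (both samples are taken locally here; the real run takes the guest sample over SSH):

```shell
# Sample two clocks and compute the drift in seconds. awk handles the
# fractional arithmetic without needing bc.
guest=$(date +%s.%N)
host=$(date +%s.%N)
delta=$(awk -v h="$host" -v g="$guest" 'BEGIN{ printf "%.6f", h - g }')
echo "delta=${delta}s"
# sudo date -s @"${host%.*}"   # the corrective step, skipped in this sketch
```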
	I0910 19:38:10.361569     716 start.go:83] releasing machines lock for "multinode-629100-m02", held for 2m0.8178709s
	I0910 19:38:10.361805     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:38:12.210999     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:38:12.211943     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:12.212024     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:38:14.427503     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:38:14.427503     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:14.430975     716 out.go:177] * Found network options:
	I0910 19:38:14.433876     716 out.go:177]   - NO_PROXY=172.31.210.71
	W0910 19:38:14.436126     716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 19:38:14.439074     716 out.go:177]   - NO_PROXY=172.31.210.71
	W0910 19:38:14.441687     716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 19:38:14.442872     716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 19:38:14.446624     716 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 19:38:14.446769     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:38:14.452261     716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 19:38:14.452261     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:38:16.360977     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:38:16.360977     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:16.361237     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:38:16.369527     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:38:16.369527     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:16.369527     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:38:18.666740     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:38:18.666923     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:18.666923     716 sshutil.go:53] new ssh client: &{IP:172.31.209.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:38:18.691354     716 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:38:18.691354     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:18.691632     716 sshutil.go:53] new ssh client: &{IP:172.31.209.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:38:18.765718     716 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0910 19:38:18.766813     716 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3141659s)
	W0910 19:38:18.766813     716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:38:18.775617     716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:38:18.776632     716 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0910 19:38:18.776632     716 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3296487s)
	W0910 19:38:18.776632     716 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 19:38:18.804108     716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0910 19:38:18.804344     716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:38:18.804369     716 start.go:495] detecting cgroup driver to use...
	I0910 19:38:18.804475     716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:38:18.836808     716 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0910 19:38:18.845793     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 19:38:18.874496     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 19:38:18.892820     716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 19:38:18.902087     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W0910 19:38:18.919090     716 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 19:38:18.919149     716 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 19:38:18.934558     716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:38:18.962066     716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 19:38:18.988054     716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:38:19.014973     716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:38:19.043674     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 19:38:19.080143     716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 19:38:19.113574     716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 19:38:19.140541     716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:38:19.158542     716 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 19:38:19.171871     716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:38:19.195681     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:38:19.367923     716 ssh_runner.go:195] Run: sudo systemctl restart containerd
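The `sed` runs above rewrite `/etc/containerd/config.toml` in place; the `SystemdCgroup = false` edit is what the "configuring containerd to use cgroupfs" line refers to. One of those rewrites, reproduced against a throwaway config file:

```shell
# Force SystemdCgroup=false while preserving the original indentation,
# matching the sed expression from the log.
cfg=$(mktemp)
printf '[plugins."io.containerd.grpc.v1.cri"]\n  SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
out=$(cat "$cfg")
```

The capture group `( *)` keeps the leading spaces so the TOML nesting is untouched; only the value flips.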
	I0910 19:38:19.396685     716 start.go:495] detecting cgroup driver to use...
	I0910 19:38:19.406299     716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 19:38:19.427516     716 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0910 19:38:19.427516     716 command_runner.go:130] > [Unit]
	I0910 19:38:19.427611     716 command_runner.go:130] > Description=Docker Application Container Engine
	I0910 19:38:19.427611     716 command_runner.go:130] > Documentation=https://docs.docker.com
	I0910 19:38:19.427611     716 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0910 19:38:19.427611     716 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0910 19:38:19.427611     716 command_runner.go:130] > StartLimitBurst=3
	I0910 19:38:19.427611     716 command_runner.go:130] > StartLimitIntervalSec=60
	I0910 19:38:19.427611     716 command_runner.go:130] > [Service]
	I0910 19:38:19.427611     716 command_runner.go:130] > Type=notify
	I0910 19:38:19.427611     716 command_runner.go:130] > Restart=on-failure
	I0910 19:38:19.427611     716 command_runner.go:130] > Environment=NO_PROXY=172.31.210.71
	I0910 19:38:19.427705     716 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0910 19:38:19.427738     716 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0910 19:38:19.427738     716 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0910 19:38:19.427738     716 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0910 19:38:19.427738     716 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0910 19:38:19.427738     716 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0910 19:38:19.427738     716 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0910 19:38:19.427738     716 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0910 19:38:19.427837     716 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0910 19:38:19.427837     716 command_runner.go:130] > ExecStart=
	I0910 19:38:19.427837     716 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0910 19:38:19.427837     716 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0910 19:38:19.427837     716 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0910 19:38:19.427837     716 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0910 19:38:19.427837     716 command_runner.go:130] > LimitNOFILE=infinity
	I0910 19:38:19.427837     716 command_runner.go:130] > LimitNPROC=infinity
	I0910 19:38:19.427938     716 command_runner.go:130] > LimitCORE=infinity
	I0910 19:38:19.427938     716 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0910 19:38:19.427938     716 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0910 19:38:19.427938     716 command_runner.go:130] > TasksMax=infinity
	I0910 19:38:19.427938     716 command_runner.go:130] > TimeoutStartSec=0
	I0910 19:38:19.427938     716 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0910 19:38:19.427938     716 command_runner.go:130] > Delegate=yes
	I0910 19:38:19.427938     716 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0910 19:38:19.427938     716 command_runner.go:130] > KillMode=process
	I0910 19:38:19.428041     716 command_runner.go:130] > [Install]
	I0910 19:38:19.428041     716 command_runner.go:130] > WantedBy=multi-user.target
	I0910 19:38:19.437527     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:38:19.467349     716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:38:19.509309     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:38:19.539679     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:38:19.569544     716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 19:38:19.617004     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:38:19.639748     716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:38:19.668756     716 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0910 19:38:19.677463     716 ssh_runner.go:195] Run: which cri-dockerd
	I0910 19:38:19.683666     716 command_runner.go:130] > /usr/bin/cri-dockerd
	I0910 19:38:19.691621     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 19:38:19.710059     716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 19:38:19.747541     716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 19:38:19.921849     716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 19:38:20.091342     716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 19:38:20.091475     716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 19:38:20.131700     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:38:20.309152     716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 19:38:22.847610     716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5382894s)
	I0910 19:38:22.858753     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 19:38:22.890889     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:38:22.922190     716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 19:38:23.107364     716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 19:38:23.290779     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:38:23.471721     716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 19:38:23.507577     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:38:23.538102     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:38:23.709076     716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 19:38:23.806978     716 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 19:38:23.815838     716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 19:38:23.823601     716 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0910 19:38:23.823769     716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 19:38:23.823769     716 command_runner.go:130] > Device: 0,22	Inode: 896         Links: 1
	I0910 19:38:23.823769     716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0910 19:38:23.823769     716 command_runner.go:130] > Access: 2024-09-10 19:38:23.955433106 +0000
	I0910 19:38:23.823769     716 command_runner.go:130] > Modify: 2024-09-10 19:38:23.955433106 +0000
	I0910 19:38:23.823850     716 command_runner.go:130] > Change: 2024-09-10 19:38:23.958433288 +0000
	I0910 19:38:23.823873     716 command_runner.go:130] >  Birth: -
	I0910 19:38:23.823873     716 start.go:563] Will wait 60s for crictl version
	I0910 19:38:23.832677     716 ssh_runner.go:195] Run: which crictl
	I0910 19:38:23.838755     716 command_runner.go:130] > /usr/bin/crictl
	I0910 19:38:23.848365     716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:38:23.900183     716 command_runner.go:130] > Version:  0.1.0
	I0910 19:38:23.900183     716 command_runner.go:130] > RuntimeName:  docker
	I0910 19:38:23.900183     716 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0910 19:38:23.900183     716 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 19:38:23.900183     716 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 19:38:23.907899     716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:38:23.938414     716 command_runner.go:130] > 27.2.0
	I0910 19:38:23.950628     716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:38:23.981536     716 command_runner.go:130] > 27.2.0
	I0910 19:38:23.992138     716 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 19:38:23.997458     716 out.go:177]   - env NO_PROXY=172.31.210.71
	I0910 19:38:24.002209     716 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 19:38:24.006360     716 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 19:38:24.006360     716 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 19:38:24.006360     716 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 19:38:24.006360     716 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 19:38:24.008832     716 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 19:38:24.008832     716 ip.go:214] interface addr: 172.31.208.1/20
	I0910 19:38:24.019181     716 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 19:38:24.025764     716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
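The `/etc/hosts` command above is a filter-then-append rewrite: strip any stale `host.minikube.internal` entry, append the current host IP, and copy the result back with `sudo`. A sketch against a temp file (172.31.208.1 comes from the log; the stale 172.31.200.9 entry is a made-up example):

```shell
# grep -v drops the old mapping (matching on the literal tab before the
# hostname), then the fresh mapping is appended.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.31.200.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '172.31.208.1\thost.minikube.internal\n'; } > "$hosts.new"
out=$(cat "$hosts.new")
```

Filtering before appending makes the operation idempotent: rerunning it never accumulates duplicate `host.minikube.internal` lines.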
	I0910 19:38:24.047099     716 mustload.go:65] Loading cluster: multinode-629100
	I0910 19:38:24.047239     716 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:38:24.048210     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:38:25.959501     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:38:25.959501     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:25.959501     716 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:38:25.960629     716 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100 for IP: 172.31.209.0
	I0910 19:38:25.960629     716 certs.go:194] generating shared ca certs ...
	I0910 19:38:25.960718     716 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:38:25.961249     716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 19:38:25.961542     716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 19:38:25.961715     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 19:38:25.961865     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 19:38:25.961970     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 19:38:25.962066     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 19:38:25.962490     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 19:38:25.962709     716 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 19:38:25.962788     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 19:38:25.963028     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 19:38:25.963275     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 19:38:25.963491     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 19:38:25.963853     716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 19:38:25.963976     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 19:38:25.963976     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:38:25.963976     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 19:38:25.963976     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:38:26.011563     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 19:38:26.058460     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:38:26.100346     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 19:38:26.141637     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 19:38:26.190508     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:38:26.235197     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 19:38:26.293975     716 ssh_runner.go:195] Run: openssl version
	I0910 19:38:26.301955     716 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 19:38:26.311384     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 19:38:26.338634     716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 19:38:26.344565     716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:38:26.345760     716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:38:26.354335     716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 19:38:26.363120     716 command_runner.go:130] > 3ec20f2e
	I0910 19:38:26.371609     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:38:26.399557     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:38:26.427507     716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:38:26.434835     716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:38:26.435043     716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:38:26.443670     716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:38:26.451949     716 command_runner.go:130] > b5213941
	I0910 19:38:26.464455     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:38:26.502435     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 19:38:26.531713     716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 19:38:26.538320     716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:38:26.538320     716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:38:26.547160     716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 19:38:26.555540     716 command_runner.go:130] > 51391683
	I0910 19:38:26.564650     716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 19:38:26.594140     716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:38:26.600073     716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 19:38:26.600073     716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 19:38:26.600878     716 kubeadm.go:934] updating node {m02 172.31.209.0 8443 v1.31.0 docker false true} ...
	I0910 19:38:26.600878     716 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-629100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.209.0
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:38:26.608522     716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:38:26.625549     716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	I0910 19:38:26.625940     716 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 19:38:26.636461     716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 19:38:26.654215     716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0910 19:38:26.654215     716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0910 19:38:26.654215     716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0910 19:38:26.654215     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 19:38:26.654215     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 19:38:26.666443     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:38:26.666443     716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 19:38:26.668596     716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 19:38:26.687084     716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 19:38:26.687084     716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 19:38:26.687209     716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 19:38:26.687263     716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 19:38:26.687263     716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 19:38:26.687263     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 19:38:26.687263     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 19:38:26.696320     716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 19:38:26.759926     716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 19:38:26.767043     716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 19:38:26.767307     716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0910 19:38:27.658630     716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0910 19:38:27.677344     716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0910 19:38:27.706452     716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:38:27.743760     716 ssh_runner.go:195] Run: grep 172.31.210.71	control-plane.minikube.internal$ /etc/hosts
	I0910 19:38:27.749406     716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.210.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:38:27.782575     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:38:27.963436     716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:38:27.993396     716 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:38:27.993580     716 start.go:317] joinCluster: &{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.210.71 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:38:27.994178     716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 19:38:27.994271     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:38:29.888649     716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:38:29.888785     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:29.888856     716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:38:32.070496     716 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:38:32.070496     716 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:38:32.072199     716 sshutil.go:53] new ssh client: &{IP:172.31.210.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:38:32.246919     716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token s7hblu.5dbbs39mzn7bm0mw --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
	I0910 19:38:32.247835     716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.253376s)
	I0910 19:38:32.247945     716 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:38:32.247945     716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s7hblu.5dbbs39mzn7bm0mw --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m02"
	I0910 19:38:32.407989     716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:38:33.712046     716 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 19:38:33.712046     716 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0910 19:38:33.712132     716 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0910 19:38:33.712132     716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:38:33.712132     716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:38:33.712132     716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0910 19:38:33.712132     716 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:38:33.712132     716 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001061843s
	I0910 19:38:33.712132     716 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0910 19:38:33.712132     716 command_runner.go:130] > This node has joined the cluster:
	I0910 19:38:33.712132     716 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0910 19:38:33.712132     716 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0910 19:38:33.712132     716 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0910 19:38:33.712132     716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s7hblu.5dbbs39mzn7bm0mw --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m02": (1.46409s)
	I0910 19:38:33.712132     716 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 19:38:33.894195     716 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0910 19:38:34.070341     716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-629100-m02 minikube.k8s.io/updated_at=2024_09_10T19_38_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=multinode-629100 minikube.k8s.io/primary=false
	I0910 19:38:34.185058     716 command_runner.go:130] > node/multinode-629100-m02 labeled
	I0910 19:38:34.185058     716 start.go:319] duration metric: took 6.1910683s to joinCluster
	I0910 19:38:34.185058     716 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:38:34.185681     716 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:38:34.189159     716 out.go:177] * Verifying Kubernetes components...
	I0910 19:38:34.199575     716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:38:34.386282     716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:38:34.411960     716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:38:34.412612     716 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.210.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:38:34.413455     716 node_ready.go:35] waiting up to 6m0s for node "multinode-629100-m02" to be "Ready" ...
	I0910 19:38:34.413608     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:34.413608     716 round_trippers.go:469] Request Headers:
	I0910 19:38:34.413608     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:34.413675     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:34.427245     716 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0910 19:38:34.427306     716 round_trippers.go:577] Response Headers:
	I0910 19:38:34.427306     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:34.427306     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:34.427306     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:34.427306     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:34.427306     716 round_trippers.go:580]     Content-Length: 3912
	I0910 19:38:34.427306     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:34 GMT
	I0910 19:38:34.427306     716 round_trippers.go:580]     Audit-Id: c2e19736-4b34-45ea-8f09-c26fe5ae81b5
	I0910 19:38:34.427306     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"558","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2888 chars]
	I0910 19:38:34.924044     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:34.924044     716 round_trippers.go:469] Request Headers:
	I0910 19:38:34.924044     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:34.924044     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:34.927635     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:34.927635     716 round_trippers.go:577] Response Headers:
	I0910 19:38:34.927635     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:35 GMT
	I0910 19:38:34.927635     716 round_trippers.go:580]     Audit-Id: 0e78ee32-87ec-402b-8f9d-2387b4c4244d
	I0910 19:38:34.927635     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:34.928118     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:34.928118     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:34.928118     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:34.928118     716 round_trippers.go:580]     Content-Length: 3912
	I0910 19:38:34.928216     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"558","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2888 chars]
	I0910 19:38:35.425258     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:35.425442     716 round_trippers.go:469] Request Headers:
	I0910 19:38:35.425442     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:35.425442     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:35.430789     716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:38:35.430789     716 round_trippers.go:577] Response Headers:
	I0910 19:38:35.430789     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:35 GMT
	I0910 19:38:35.430789     716 round_trippers.go:580]     Audit-Id: e1cb7923-ab5d-4167-93d1-65411323c142
	I0910 19:38:35.430789     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:35.430789     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:35.430789     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:35.430789     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:35.430789     716 round_trippers.go:580]     Content-Length: 3912
	I0910 19:38:35.430789     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"558","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2888 chars]
	I0910 19:38:35.926732     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:35.926808     716 round_trippers.go:469] Request Headers:
	I0910 19:38:35.926808     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:35.926884     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:35.930385     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:35.930385     716 round_trippers.go:577] Response Headers:
	I0910 19:38:35.930385     716 round_trippers.go:580]     Audit-Id: 7e0e652e-83c4-41e7-9a99-fde30d952d43
	I0910 19:38:35.930385     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:35.930385     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:35.930826     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:35.930826     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:35.930826     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:35.930826     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:36 GMT
	I0910 19:38:35.930939     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:36.425048     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:36.425048     716 round_trippers.go:469] Request Headers:
	I0910 19:38:36.425048     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:36.425126     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:36.428764     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:36.428956     716 round_trippers.go:577] Response Headers:
	I0910 19:38:36.428956     716 round_trippers.go:580]     Audit-Id: 371e7dc3-7335-4f4d-84c0-eb6bfa0e4e6b
	I0910 19:38:36.428956     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:36.428956     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:36.428956     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:36.428956     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:36.428956     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:36.428956     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:36 GMT
	I0910 19:38:36.429207     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:36.429551     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
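The cycle above (GET the Node object, then log `node_ready ... "Ready":"False"`) repeats roughly every 500 ms while minikube waits for `multinode-629100-m02` to become Ready. A minimal sketch of that readiness check, assuming only the standard Kubernetes Node condition schema (`status.conditions` with a condition of `type: "Ready"`) and not minikube's actual implementation:

```python
def node_ready(node: dict) -> bool:
    """Return True iff the Node object's "Ready" condition has status "True".

    `node` is the parsed JSON body of GET /api/v1/nodes/<name>, as seen in the
    truncated Response Body lines above. A node with no Ready condition yet
    (e.g. freshly joined) is treated as not ready.
    """
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # Ready condition not reported yet
```

In the log, the check keeps returning the `"Ready":"False"` branch because the kubelet on the new node has not yet posted a Ready condition with status `"True"`, so the poll continues at its ~500 ms cadence.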
	I0910 19:38:36.924931     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:36.924931     716 round_trippers.go:469] Request Headers:
	I0910 19:38:36.924931     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:36.924931     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:36.935655     716 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 19:38:36.936620     716 round_trippers.go:577] Response Headers:
	I0910 19:38:36.936620     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:36.936620     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:36.936620     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:36.936620     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:36.936620     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:36.936620     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:37 GMT
	I0910 19:38:36.936680     716 round_trippers.go:580]     Audit-Id: 9e9719cc-5c87-468d-98c4-6d5f430ea844
	I0910 19:38:36.936832     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:37.423087     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:37.423168     716 round_trippers.go:469] Request Headers:
	I0910 19:38:37.423168     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:37.423168     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:37.442898     716 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0910 19:38:37.442898     716 round_trippers.go:577] Response Headers:
	I0910 19:38:37.442898     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:37.442898     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:37.442898     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:37.442898     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:37 GMT
	I0910 19:38:37.442898     716 round_trippers.go:580]     Audit-Id: 1ab872ae-bbc1-44ea-a665-86d2ea56ae51
	I0910 19:38:37.442898     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:37.442898     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:37.443871     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:37.920745     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:37.920745     716 round_trippers.go:469] Request Headers:
	I0910 19:38:37.920825     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:37.920825     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:37.924281     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:37.924281     716 round_trippers.go:577] Response Headers:
	I0910 19:38:37.924281     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:37.924281     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:37.924281     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:37.924281     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:37.924281     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:38 GMT
	I0910 19:38:37.924281     716 round_trippers.go:580]     Audit-Id: 0deb1766-4244-4bd2-975d-4c6e1549c6ab
	I0910 19:38:37.924281     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:37.924493     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:38.426259     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:38.426352     716 round_trippers.go:469] Request Headers:
	I0910 19:38:38.426352     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:38.426352     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:38.430612     716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:38:38.430612     716 round_trippers.go:577] Response Headers:
	I0910 19:38:38.430612     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:38.430612     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:38.430612     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:38.430612     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:38 GMT
	I0910 19:38:38.430612     716 round_trippers.go:580]     Audit-Id: e3c6ef02-afbd-4b16-a553-20ee914335cb
	I0910 19:38:38.430612     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:38.430612     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:38.431240     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:38.431596     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:38.926878     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:38.926878     716 round_trippers.go:469] Request Headers:
	I0910 19:38:38.926878     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:38.926878     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:38.930503     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:38.930503     716 round_trippers.go:577] Response Headers:
	I0910 19:38:38.930503     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:38.931061     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:38.931061     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:38.931061     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:38.931061     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:38.931061     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:39 GMT
	I0910 19:38:38.931061     716 round_trippers.go:580]     Audit-Id: 73414617-d514-442e-9cf3-d72a15877ed9
	I0910 19:38:38.931204     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:39.427862     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:39.427862     716 round_trippers.go:469] Request Headers:
	I0910 19:38:39.427862     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:39.427862     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:39.431765     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:39.431765     716 round_trippers.go:577] Response Headers:
	I0910 19:38:39.431765     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:39.431765     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:39 GMT
	I0910 19:38:39.431765     716 round_trippers.go:580]     Audit-Id: d7d327ec-7d35-4bbe-b434-a7a6d911ab98
	I0910 19:38:39.431765     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:39.431765     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:39.431765     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:39.432045     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:39.432167     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:39.918481     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:39.918551     716 round_trippers.go:469] Request Headers:
	I0910 19:38:39.918551     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:39.918551     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:39.925586     716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:38:39.925586     716 round_trippers.go:577] Response Headers:
	I0910 19:38:39.925586     716 round_trippers.go:580]     Audit-Id: cb92e760-9840-4166-a030-0876b78aa267
	I0910 19:38:39.925586     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:39.925586     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:39.925586     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:39.925586     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:39.925586     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:39.925586     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:40 GMT
	I0910 19:38:39.925586     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:40.427064     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:40.427323     716 round_trippers.go:469] Request Headers:
	I0910 19:38:40.427323     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:40.427407     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:40.431283     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:40.431283     716 round_trippers.go:577] Response Headers:
	I0910 19:38:40.431283     716 round_trippers.go:580]     Audit-Id: 4a3d56cd-d5b9-42df-9a5a-91f0566419fa
	I0910 19:38:40.431283     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:40.431283     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:40.431354     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:40.431354     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:40.431354     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:40.431354     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:40 GMT
	I0910 19:38:40.431480     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:40.919530     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:40.919530     716 round_trippers.go:469] Request Headers:
	I0910 19:38:40.919530     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:40.919530     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:40.922091     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:38:40.922091     716 round_trippers.go:577] Response Headers:
	I0910 19:38:40.923086     716 round_trippers.go:580]     Audit-Id: b796dde5-7822-4598-b070-bc7168442802
	I0910 19:38:40.923086     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:40.923086     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:40.923086     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:40.923086     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:40.923086     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:40.923086     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:41 GMT
	I0910 19:38:40.923086     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:40.923086     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:41.424105     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:41.424105     716 round_trippers.go:469] Request Headers:
	I0910 19:38:41.424105     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:41.424105     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:41.428765     716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:38:41.429407     716 round_trippers.go:577] Response Headers:
	I0910 19:38:41.429407     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:41.429407     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:41.429407     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:41.429407     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:41 GMT
	I0910 19:38:41.429407     716 round_trippers.go:580]     Audit-Id: 533c2286-4549-4a8c-972a-033de990f869
	I0910 19:38:41.429407     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:41.429407     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:41.429407     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:41.928308     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:41.928308     716 round_trippers.go:469] Request Headers:
	I0910 19:38:41.928308     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:41.928308     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:41.932361     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:41.932431     716 round_trippers.go:577] Response Headers:
	I0910 19:38:41.932431     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:41.932431     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:41.932431     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:41.932431     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:41.932431     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:42 GMT
	I0910 19:38:41.932431     716 round_trippers.go:580]     Audit-Id: 974f6ed7-4f07-4235-9f80-afd1fa1b8d8e
	I0910 19:38:41.932431     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:41.932431     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:42.422021     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:42.422101     716 round_trippers.go:469] Request Headers:
	I0910 19:38:42.422101     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:42.422135     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:42.424834     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:38:42.424834     716 round_trippers.go:577] Response Headers:
	I0910 19:38:42.424834     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:42 GMT
	I0910 19:38:42.424834     716 round_trippers.go:580]     Audit-Id: 3971ca61-a4c7-47b5-857a-36664c37ff6b
	I0910 19:38:42.424834     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:42.424834     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:42.424834     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:42.424834     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:42.424834     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:42.425730     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:42.921098     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:42.921098     716 round_trippers.go:469] Request Headers:
	I0910 19:38:42.921098     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:42.921098     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:42.928913     716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:38:42.928913     716 round_trippers.go:577] Response Headers:
	I0910 19:38:42.928913     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:42.928913     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:42.928913     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:42.928913     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:42.928913     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:42.928913     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:43 GMT
	I0910 19:38:42.928913     716 round_trippers.go:580]     Audit-Id: b14419c5-37c9-47b9-9016-58bac8bd2ead
	I0910 19:38:42.928913     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:42.930174     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:43.423877     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:43.423877     716 round_trippers.go:469] Request Headers:
	I0910 19:38:43.423877     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:43.423877     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:43.498696     716 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0910 19:38:43.499401     716 round_trippers.go:577] Response Headers:
	I0910 19:38:43.499401     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:43.499401     716 round_trippers.go:580]     Content-Length: 4021
	I0910 19:38:43.499401     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:43 GMT
	I0910 19:38:43.499401     716 round_trippers.go:580]     Audit-Id: 89a999e2-06e8-4fcd-b3e3-310e18e5cfc2
	I0910 19:38:43.499484     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:43.499484     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:43.499484     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:43.499629     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"564","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 2997 chars]
	I0910 19:38:43.923914     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:43.923914     716 round_trippers.go:469] Request Headers:
	I0910 19:38:43.923914     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:43.923914     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:43.927101     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:43.927101     716 round_trippers.go:577] Response Headers:
	I0910 19:38:43.927101     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:43.927101     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:43.927101     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:44 GMT
	I0910 19:38:43.927101     716 round_trippers.go:580]     Audit-Id: e9fcbe9b-2dc6-4caa-805d-2f1b2978221f
	I0910 19:38:43.927101     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:43.927101     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:43.927101     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:44.428387     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:44.428387     716 round_trippers.go:469] Request Headers:
	I0910 19:38:44.428387     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:44.428387     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:44.431601     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:44.431601     716 round_trippers.go:577] Response Headers:
	I0910 19:38:44.431601     716 round_trippers.go:580]     Audit-Id: d880ad38-eb20-4ed4-aae0-1607ab0e4618
	I0910 19:38:44.431601     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:44.431601     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:44.431601     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:44.431601     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:44.431601     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:44 GMT
	I0910 19:38:44.431601     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:44.928898     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:44.929018     716 round_trippers.go:469] Request Headers:
	I0910 19:38:44.929018     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:44.929018     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:44.936567     716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:38:44.936567     716 round_trippers.go:577] Response Headers:
	I0910 19:38:44.936567     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:45 GMT
	I0910 19:38:44.936567     716 round_trippers.go:580]     Audit-Id: f636cc42-ae44-436b-bef5-275cd0835aa0
	I0910 19:38:44.936567     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:44.936567     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:44.936567     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:44.936567     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:44.937322     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:44.937748     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:45.418566     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:45.418626     716 round_trippers.go:469] Request Headers:
	I0910 19:38:45.418676     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:45.418676     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:45.421867     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:45.421867     716 round_trippers.go:577] Response Headers:
	I0910 19:38:45.421867     716 round_trippers.go:580]     Audit-Id: 6ed5002e-6158-4a92-8815-4485c8cfb8c7
	I0910 19:38:45.421867     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:45.421867     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:45.421867     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:45.421867     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:45.421867     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:45 GMT
	I0910 19:38:45.422370     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:45.924451     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:45.924509     716 round_trippers.go:469] Request Headers:
	I0910 19:38:45.924509     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:45.924509     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:45.927892     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:45.927892     716 round_trippers.go:577] Response Headers:
	I0910 19:38:45.927892     716 round_trippers.go:580]     Audit-Id: e4df5912-7111-4823-8059-2dd7e950deee
	I0910 19:38:45.927892     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:45.928675     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:45.928675     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:45.928675     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:45.928675     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:46 GMT
	I0910 19:38:45.928858     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:46.428872     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:46.428872     716 round_trippers.go:469] Request Headers:
	I0910 19:38:46.428872     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:46.428998     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:46.438664     716 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0910 19:38:46.438664     716 round_trippers.go:577] Response Headers:
	I0910 19:38:46.438664     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:46.438664     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:46.438664     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:46.438664     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:46 GMT
	I0910 19:38:46.438664     716 round_trippers.go:580]     Audit-Id: c8bc62d4-55c3-48ac-beb8-5feb4cf8cbcf
	I0910 19:38:46.438664     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:46.438664     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:46.918236     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:46.918236     716 round_trippers.go:469] Request Headers:
	I0910 19:38:46.918236     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:46.918236     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:46.921797     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:46.922441     716 round_trippers.go:577] Response Headers:
	I0910 19:38:46.922441     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:46.922441     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:46.922441     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:46.922441     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:47 GMT
	I0910 19:38:46.922441     716 round_trippers.go:580]     Audit-Id: 11ac9b90-3209-47bc-b52b-b6991f5f5690
	I0910 19:38:46.922441     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:46.922724     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:47.424601     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:47.424865     716 round_trippers.go:469] Request Headers:
	I0910 19:38:47.424865     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:47.424865     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:47.430197     716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:38:47.431201     716 round_trippers.go:577] Response Headers:
	I0910 19:38:47.431201     716 round_trippers.go:580]     Audit-Id: 6b1cb7c7-3191-482a-8077-1f265af4c871
	I0910 19:38:47.431201     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:47.431201     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:47.431201     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:47.431201     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:47.431201     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:47 GMT
	I0910 19:38:47.431310     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:47.432184     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:47.923306     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:47.923306     716 round_trippers.go:469] Request Headers:
	I0910 19:38:47.923306     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:47.923306     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:47.927629     716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:38:47.927684     716 round_trippers.go:577] Response Headers:
	I0910 19:38:47.927684     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:47.927684     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:47.927740     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:47.927740     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:47.927858     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:48 GMT
	I0910 19:38:47.927969     716 round_trippers.go:580]     Audit-Id: 32799a6b-e171-4ed6-8918-f4a63ab18eca
	I0910 19:38:47.928306     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:48.429138     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:48.429196     716 round_trippers.go:469] Request Headers:
	I0910 19:38:48.429196     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:48.429196     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:48.431962     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:38:48.432312     716 round_trippers.go:577] Response Headers:
	I0910 19:38:48.432380     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:48.432380     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:48 GMT
	I0910 19:38:48.432451     716 round_trippers.go:580]     Audit-Id: 0ef006d3-4e9c-4e2a-8c8f-13793f2e4940
	I0910 19:38:48.432513     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:48.432513     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:48.432513     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:48.432839     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:48.919206     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:48.919579     716 round_trippers.go:469] Request Headers:
	I0910 19:38:48.919579     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:48.919579     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:49.199422     716 round_trippers.go:574] Response Status: 200 OK in 279 milliseconds
	I0910 19:38:49.199422     716 round_trippers.go:577] Response Headers:
	I0910 19:38:49.199422     716 round_trippers.go:580]     Audit-Id: 1ab6a378-022e-44a3-8e32-dedad538fdfb
	I0910 19:38:49.199422     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:49.199422     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:49.199422     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:49.199422     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:49.199422     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:49 GMT
	I0910 19:38:49.199422     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:49.416300     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:49.416737     716 round_trippers.go:469] Request Headers:
	I0910 19:38:49.416737     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:49.416737     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:49.444466     716 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0910 19:38:49.445463     716 round_trippers.go:577] Response Headers:
	I0910 19:38:49.445463     716 round_trippers.go:580]     Audit-Id: a0f20e44-03f4-43fe-91e6-a69f819b0b2d
	I0910 19:38:49.445490     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:49.445490     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:49.445490     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:49.445490     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:49.445490     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:49 GMT
	I0910 19:38:49.445616     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:49.446017     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:49.917074     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:49.917242     716 round_trippers.go:469] Request Headers:
	I0910 19:38:49.917242     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:49.917242     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:49.922532     716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:38:49.922532     716 round_trippers.go:577] Response Headers:
	I0910 19:38:49.923070     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:49.923070     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:49.923151     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:50 GMT
	I0910 19:38:49.923151     716 round_trippers.go:580]     Audit-Id: 14b0a66d-0f43-4387-b334-fdbcd4b62e1d
	I0910 19:38:49.923151     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:49.923151     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:49.923234     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:50.418489     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:50.418489     716 round_trippers.go:469] Request Headers:
	I0910 19:38:50.418489     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:50.418489     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:50.422344     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:50.422344     716 round_trippers.go:577] Response Headers:
	I0910 19:38:50.422344     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:50.422344     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:50.422344     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:50.422344     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:50.422344     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:50 GMT
	I0910 19:38:50.422344     716 round_trippers.go:580]     Audit-Id: e2b4ae3b-4f27-43fb-b7ce-953c31949684
	I0910 19:38:50.422344     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:50.921164     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:50.921233     716 round_trippers.go:469] Request Headers:
	I0910 19:38:50.921233     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:50.921233     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:50.925780     716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:38:50.925780     716 round_trippers.go:577] Response Headers:
	I0910 19:38:50.925780     716 round_trippers.go:580]     Audit-Id: 1f36acd9-f5a2-433b-a4bb-49cd3118a230
	I0910 19:38:50.925780     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:50.925780     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:50.925780     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:50.925780     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:50.925780     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:51 GMT
	I0910 19:38:50.926431     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:51.421606     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:51.421839     716 round_trippers.go:469] Request Headers:
	I0910 19:38:51.421839     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:51.421839     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:51.424273     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:38:51.424273     716 round_trippers.go:577] Response Headers:
	I0910 19:38:51.424273     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:51 GMT
	I0910 19:38:51.424273     716 round_trippers.go:580]     Audit-Id: bab28cc6-65f6-486a-a8f4-c298bec29cb7
	I0910 19:38:51.424273     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:51.424273     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:51.424273     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:51.424273     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:51.425449     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:51.922278     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:51.922278     716 round_trippers.go:469] Request Headers:
	I0910 19:38:51.922278     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:51.922278     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:51.925859     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:51.926088     716 round_trippers.go:577] Response Headers:
	I0910 19:38:51.926088     716 round_trippers.go:580]     Audit-Id: f8cd4b23-15ce-4721-be93-0b4eb94451a1
	I0910 19:38:51.926088     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:51.926088     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:51.926088     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:51.926088     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:51.926088     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:52 GMT
	I0910 19:38:51.926307     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:51.926784     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:52.422196     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:52.422196     716 round_trippers.go:469] Request Headers:
	I0910 19:38:52.422196     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:52.422196     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:52.425837     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:52.426134     716 round_trippers.go:577] Response Headers:
	I0910 19:38:52.426134     716 round_trippers.go:580]     Audit-Id: 83a622de-07e3-44c4-8849-a9c3c37918bf
	I0910 19:38:52.426134     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:52.426134     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:52.426134     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:52.426225     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:52.426294     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:52 GMT
	I0910 19:38:52.426725     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:52.921505     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:52.921505     716 round_trippers.go:469] Request Headers:
	I0910 19:38:52.921505     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:52.921505     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:52.928938     716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:38:52.928938     716 round_trippers.go:577] Response Headers:
	I0910 19:38:52.928938     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:52.928938     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:52.928938     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:53 GMT
	I0910 19:38:52.928938     716 round_trippers.go:580]     Audit-Id: fdb14510-0a80-46cf-88af-431059bd669b
	I0910 19:38:52.928938     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:52.928938     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:52.928938     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:53.415433     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:53.415500     716 round_trippers.go:469] Request Headers:
	I0910 19:38:53.415500     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:53.415571     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:53.418144     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:38:53.418895     716 round_trippers.go:577] Response Headers:
	I0910 19:38:53.418972     716 round_trippers.go:580]     Audit-Id: 7582cced-f4b3-40d7-bf65-f5b90729e237
	I0910 19:38:53.418972     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:53.418972     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:53.418972     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:53.418972     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:53.418972     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:53 GMT
	I0910 19:38:53.418972     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:53.923734     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:53.923734     716 round_trippers.go:469] Request Headers:
	I0910 19:38:53.923734     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:53.923734     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:53.926539     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:38:53.927498     716 round_trippers.go:577] Response Headers:
	I0910 19:38:53.927498     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:54 GMT
	I0910 19:38:53.927498     716 round_trippers.go:580]     Audit-Id: dc8ad7b6-7489-4148-a346-32c76f63771d
	I0910 19:38:53.927498     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:53.927498     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:53.927498     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:53.927498     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:53.927698     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:53.928067     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:54.424711     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:54.424711     716 round_trippers.go:469] Request Headers:
	I0910 19:38:54.425140     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:54.425140     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:54.428523     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:54.428665     716 round_trippers.go:577] Response Headers:
	I0910 19:38:54.428665     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:54.428665     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:54.428665     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:54.428665     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:54 GMT
	I0910 19:38:54.428814     716 round_trippers.go:580]     Audit-Id: a0628171-11c4-450c-8254-be3c507e846f
	I0910 19:38:54.428814     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:54.429104     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:54.924502     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:54.924634     716 round_trippers.go:469] Request Headers:
	I0910 19:38:54.924634     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:54.924634     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:54.931893     716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:38:54.931893     716 round_trippers.go:577] Response Headers:
	I0910 19:38:54.931893     716 round_trippers.go:580]     Audit-Id: d66d8972-03bf-41c9-85c0-8f46d2ed056e
	I0910 19:38:54.931893     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:54.931893     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:54.931893     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:54.931893     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:54.931893     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:55 GMT
	I0910 19:38:54.931893     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:55.423287     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:55.423642     716 round_trippers.go:469] Request Headers:
	I0910 19:38:55.423642     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:55.423642     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:55.426354     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:38:55.427359     716 round_trippers.go:577] Response Headers:
	I0910 19:38:55.427427     716 round_trippers.go:580]     Audit-Id: c9a9445a-0970-4bdd-861d-6b2845dc0a98
	I0910 19:38:55.427427     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:55.427427     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:55.427427     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:55.427427     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:55.427427     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:55 GMT
	I0910 19:38:55.427951     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:55.918987     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:55.919065     716 round_trippers.go:469] Request Headers:
	I0910 19:38:55.919065     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:55.919065     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:55.922813     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:55.922813     716 round_trippers.go:577] Response Headers:
	I0910 19:38:55.922813     716 round_trippers.go:580]     Audit-Id: e7404aef-f2a2-49bf-b767-fd0898c37851
	I0910 19:38:55.922941     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:55.922941     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:55.922941     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:55.922941     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:55.922941     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:56 GMT
	I0910 19:38:55.923033     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:56.429062     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:56.429152     716 round_trippers.go:469] Request Headers:
	I0910 19:38:56.429152     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:56.429238     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:56.431423     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:38:56.431423     716 round_trippers.go:577] Response Headers:
	I0910 19:38:56.431423     716 round_trippers.go:580]     Audit-Id: 63b6eb82-6ac6-4417-b1e4-dc5e853b22d9
	I0910 19:38:56.432427     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:56.432457     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:56.432457     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:56.432457     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:56.432457     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:56 GMT
	I0910 19:38:56.432820     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:56.433417     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:56.928583     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:56.928583     716 round_trippers.go:469] Request Headers:
	I0910 19:38:56.928583     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:56.928583     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:56.932196     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:56.932648     716 round_trippers.go:577] Response Headers:
	I0910 19:38:56.932648     716 round_trippers.go:580]     Audit-Id: 8d580a62-6e4c-4e5c-8406-045f5ca05b7c
	I0910 19:38:56.932739     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:56.932739     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:56.932739     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:56.932806     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:56.932806     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:57 GMT
	I0910 19:38:56.933138     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:57.428433     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:57.428433     716 round_trippers.go:469] Request Headers:
	I0910 19:38:57.428644     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:57.428644     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:57.432462     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:57.432462     716 round_trippers.go:577] Response Headers:
	I0910 19:38:57.432462     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:57.432462     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:57.433496     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:57.433496     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:57 GMT
	I0910 19:38:57.433496     716 round_trippers.go:580]     Audit-Id: 857ead77-a659-4baa-ae7e-153cebbea78b
	I0910 19:38:57.433496     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:57.433644     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:57.928408     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:57.928408     716 round_trippers.go:469] Request Headers:
	I0910 19:38:57.928519     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:57.928519     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:57.931935     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:57.931935     716 round_trippers.go:577] Response Headers:
	I0910 19:38:57.931935     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:57.931935     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:57.931935     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:57.931935     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:57.931935     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:58 GMT
	I0910 19:38:57.931935     716 round_trippers.go:580]     Audit-Id: 5fca1b35-f6fe-4889-a514-6f5a29b526ee
	I0910 19:38:57.932574     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:58.429169     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:58.429169     716 round_trippers.go:469] Request Headers:
	I0910 19:38:58.429169     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:58.429609     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:58.433040     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:58.433040     716 round_trippers.go:577] Response Headers:
	I0910 19:38:58.433040     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:58.433770     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:58.433770     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:58.433770     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:58.433770     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:58 GMT
	I0910 19:38:58.433770     716 round_trippers.go:580]     Audit-Id: 7c1e76df-a957-41ca-a5d6-dc3de578b741
	I0910 19:38:58.434171     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:58.434809     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:38:58.929039     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:58.929039     716 round_trippers.go:469] Request Headers:
	I0910 19:38:58.929039     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:58.929039     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:58.936634     716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:38:58.936634     716 round_trippers.go:577] Response Headers:
	I0910 19:38:58.936634     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:58.936634     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:58.937603     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:58.937603     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:59 GMT
	I0910 19:38:58.937603     716 round_trippers.go:580]     Audit-Id: 96b0f47b-bb39-4e08-92a6-4583442c0912
	I0910 19:38:58.937603     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:58.937603     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:59.429449     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:59.429538     716 round_trippers.go:469] Request Headers:
	I0910 19:38:59.429538     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:59.429623     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:59.432877     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:59.433281     716 round_trippers.go:577] Response Headers:
	I0910 19:38:59.433344     716 round_trippers.go:580]     Audit-Id: b492d65d-c2d1-48b7-8c3d-a7a58235b98f
	I0910 19:38:59.433344     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:59.433344     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:59.433344     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:59.433426     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:59.433426     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:38:59 GMT
	I0910 19:38:59.433692     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:38:59.927349     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:38:59.927425     716 round_trippers.go:469] Request Headers:
	I0910 19:38:59.927501     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:38:59.927501     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:38:59.930917     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:38:59.931149     716 round_trippers.go:577] Response Headers:
	I0910 19:38:59.931149     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:38:59.931149     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:00 GMT
	I0910 19:38:59.931149     716 round_trippers.go:580]     Audit-Id: 07a616a7-bbea-46db-9c2e-3d6bae87d432
	I0910 19:38:59.931149     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:38:59.931149     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:38:59.931235     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:38:59.931235     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:39:00.426055     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:00.426470     716 round_trippers.go:469] Request Headers:
	I0910 19:39:00.426470     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:00.426470     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:00.431057     716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:39:00.431057     716 round_trippers.go:577] Response Headers:
	I0910 19:39:00.431057     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:00.431057     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:00.431057     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:00.431057     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:00.431057     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:00 GMT
	I0910 19:39:00.431057     716 round_trippers.go:580]     Audit-Id: 10b4828c-0861-411a-b038-a9ca005a3510
	I0910 19:39:00.431578     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:39:00.926658     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:00.926736     716 round_trippers.go:469] Request Headers:
	I0910 19:39:00.926736     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:00.926809     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:00.931834     716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:39:00.931834     716 round_trippers.go:577] Response Headers:
	I0910 19:39:00.931834     716 round_trippers.go:580]     Audit-Id: 076ec109-114c-44bf-b3c9-2682c8fe19d7
	I0910 19:39:00.931834     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:00.931834     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:00.931834     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:00.931834     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:00.931834     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:01 GMT
	I0910 19:39:00.932363     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:39:00.932469     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:39:01.429428     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:01.429428     716 round_trippers.go:469] Request Headers:
	I0910 19:39:01.429428     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:01.429428     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:01.432976     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:39:01.433044     716 round_trippers.go:577] Response Headers:
	I0910 19:39:01.433093     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:01.433093     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:01.433136     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:01 GMT
	I0910 19:39:01.433136     716 round_trippers.go:580]     Audit-Id: bff5daff-45c6-49fc-85c2-d233c78c583f
	I0910 19:39:01.433136     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:01.433136     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:01.433252     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:39:01.928341     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:01.928341     716 round_trippers.go:469] Request Headers:
	I0910 19:39:01.928775     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:01.928775     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:01.932192     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:39:01.932192     716 round_trippers.go:577] Response Headers:
	I0910 19:39:01.932481     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:01.932481     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:01.932481     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:01.932481     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:01.932561     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:02 GMT
	I0910 19:39:01.932561     716 round_trippers.go:580]     Audit-Id: bc1a7bdb-248e-487c-a8ef-8db37091becd
	I0910 19:39:01.932621     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:39:02.426091     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:02.426091     716 round_trippers.go:469] Request Headers:
	I0910 19:39:02.426225     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:02.426225     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:02.433349     716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:39:02.433349     716 round_trippers.go:577] Response Headers:
	I0910 19:39:02.433349     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:02.433349     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:02.433349     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:02.433349     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:02.433349     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:02 GMT
	I0910 19:39:02.433349     716 round_trippers.go:580]     Audit-Id: 449214c1-5cb6-4b48-bcc5-4d09fd437197
	I0910 19:39:02.433349     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:39:02.923487     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:02.923575     716 round_trippers.go:469] Request Headers:
	I0910 19:39:02.923575     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:02.923575     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:02.926875     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:39:02.927243     716 round_trippers.go:577] Response Headers:
	I0910 19:39:02.927243     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:02.927243     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:03 GMT
	I0910 19:39:02.927243     716 round_trippers.go:580]     Audit-Id: 794b219e-94d3-48ff-86f1-41b09c6cc640
	I0910 19:39:02.927340     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:02.927340     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:02.927340     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:02.927682     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:39:03.422910     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:03.423023     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.423023     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.423023     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.426478     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:39:03.426478     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.426478     716 round_trippers.go:580]     Audit-Id: c1bcab9b-43ba-4032-aca4-a13214b06264
	I0910 19:39:03.426478     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.426478     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.426478     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.426478     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.426478     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:03 GMT
	I0910 19:39:03.427474     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"575","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3389 chars]
	I0910 19:39:03.428200     716 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:39:03.919123     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:03.919185     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.919185     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.919185     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.921706     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:39:03.921706     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.922740     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.922740     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.922740     716 round_trippers.go:580]     Audit-Id: 9e62774e-f3bc-4e8b-8b0b-dd6420a4ff9b
	I0910 19:39:03.922740     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.922740     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.922740     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.923143     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"604","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3830 chars]
	I0910 19:39:03.923538     716 node_ready.go:49] node "multinode-629100-m02" has status "Ready":"True"
	I0910 19:39:03.923624     716 node_ready.go:38] duration metric: took 29.5082208s for node "multinode-629100-m02" to be "Ready" ...
	I0910 19:39:03.923624     716 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:39:03.923790     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods
	I0910 19:39:03.923790     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.923790     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.923790     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.929692     716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:39:03.929692     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.929692     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.929692     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.929692     716 round_trippers.go:580]     Audit-Id: df285e09-bcad-4b80-9e87-d8f9bb8ad82a
	I0910 19:39:03.929692     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.929692     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.929692     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.931701     716 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"604"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"405","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 72619 chars]
	I0910 19:39:03.934777     716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:03.934905     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:39:03.934962     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.935431     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.935431     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.942055     716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:39:03.942055     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.942055     716 round_trippers.go:580]     Audit-Id: 5a96676c-1cfa-4355-85ef-f461b0d0b9d7
	I0910 19:39:03.942055     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.942055     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.942055     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.942055     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.942055     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.942554     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"405","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6834 chars]
	I0910 19:39:03.943304     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:39:03.943304     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.943304     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.943304     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.945878     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:39:03.945878     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.945878     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.945878     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.946884     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.946884     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.946884     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.946884     716 round_trippers.go:580]     Audit-Id: 63fd4557-197c-473e-ba1f-25cc68ead517
	I0910 19:39:03.946924     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"416","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4951 chars]
	I0910 19:39:03.946924     716 pod_ready.go:93] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"True"
	I0910 19:39:03.946924     716 pod_ready.go:82] duration metric: took 12.0806ms for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:03.946924     716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:03.947493     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 19:39:03.947539     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.947539     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.947578     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.949738     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:39:03.949738     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.949738     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.949738     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.949738     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.949738     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.949738     716 round_trippers.go:580]     Audit-Id: e57b1b76-40b9-4248-aad5-38a3eec9c96a
	I0910 19:39:03.949738     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.949738     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"9df7e62a-50ad-4d7e-97eb-e9c494a0892b","resourceVersion":"369","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.210.71:2379","kubernetes.io/config.hash":"0ee82e84ae3f2eed59191657b2917fe8","kubernetes.io/config.mirror":"0ee82e84ae3f2eed59191657b2917fe8","kubernetes.io/config.seen":"2024-09-10T19:35:40.972003382Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6465 chars]
	I0910 19:39:03.950803     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:39:03.950902     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.950902     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.950902     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.953701     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:39:03.953701     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.953701     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.953701     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.953701     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.953701     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.953701     716 round_trippers.go:580]     Audit-Id: ae150f74-d598-4288-9ce4-27313381f9e0
	I0910 19:39:03.953701     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.953701     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"416","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4951 chars]
	I0910 19:39:03.954321     716 pod_ready.go:93] pod "etcd-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:39:03.954321     716 pod_ready.go:82] duration metric: took 7.3968ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:03.954321     716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:03.954401     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 19:39:03.954401     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.954401     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.954478     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.956176     716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:39:03.956176     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.956176     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.956176     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.956176     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.956176     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.956176     716 round_trippers.go:580]     Audit-Id: 641b6473-119d-4249-936b-da486cb058d5
	I0910 19:39:03.956176     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.957046     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"8dcf96e7-f1c8-4b97-b0a8-e4b79bd7566c","resourceVersion":"357","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.210.71:8443","kubernetes.io/config.hash":"2100bf04dc545540399a042d07adc1da","kubernetes.io/config.mirror":"2100bf04dc545540399a042d07adc1da","kubernetes.io/config.seen":"2024-09-10T19:35:40.972007582Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0910 19:39:03.957508     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:39:03.957571     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.957571     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.957571     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.959994     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:39:03.959994     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.959994     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.959994     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.959994     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.959994     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.959994     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.959994     716 round_trippers.go:580]     Audit-Id: 6e9519e0-b67b-4641-b699-fed3b4af7a64
	I0910 19:39:03.960261     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"416","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4951 chars]
	I0910 19:39:03.960561     716 pod_ready.go:93] pod "kube-apiserver-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:39:03.960561     716 pod_ready.go:82] duration metric: took 6.2397ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:03.960561     716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:03.960561     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 19:39:03.960561     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.960561     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.960561     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.978685     716 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0910 19:39:03.978685     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.978685     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.978685     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.978685     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.978685     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.978685     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.978685     716 round_trippers.go:580]     Audit-Id: 381eb1b4-15a1-4bc0-8ea8-585065bdf702
	I0910 19:39:03.978927     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"310","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0910 19:39:03.979486     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:39:03.979486     716 round_trippers.go:469] Request Headers:
	I0910 19:39:03.979541     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:03.979541     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:03.981182     716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:39:03.982111     716 round_trippers.go:577] Response Headers:
	I0910 19:39:03.982111     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:03.982111     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:03.982111     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:03.982111     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:03.982111     716 round_trippers.go:580]     Audit-Id: 0e9b48b9-5d3b-4383-8bc3-8e5ad41295b6
	I0910 19:39:03.982111     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:03.982212     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"416","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4951 chars]
	I0910 19:39:03.982610     716 pod_ready.go:93] pod "kube-controller-manager-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:39:03.982667     716 pod_ready.go:82] duration metric: took 22.1047ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:03.982667     716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:04.122592     716 request.go:632] Waited for 139.4185ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:39:04.122834     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:39:04.122834     716 round_trippers.go:469] Request Headers:
	I0910 19:39:04.122834     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:04.122951     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:04.126552     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:39:04.126552     716 round_trippers.go:577] Response Headers:
	I0910 19:39:04.126552     716 round_trippers.go:580]     Audit-Id: c47e0a8a-66dc-494e-b9b3-5b437391893e
	I0910 19:39:04.126552     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:04.126552     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:04.126552     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:04.127237     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:04.127237     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:04.127745     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qqrrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fc7fdda-d5e4-4c72-96c1-2348eb72b491","resourceVersion":"580","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0910 19:39:04.323859     716 request.go:632] Waited for 195.2663ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:04.324227     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:39:04.324227     716 round_trippers.go:469] Request Headers:
	I0910 19:39:04.324369     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:04.324417     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:04.327968     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:39:04.328062     716 round_trippers.go:577] Response Headers:
	I0910 19:39:04.328062     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:04.328062     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:04.328062     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:04.328160     716 round_trippers.go:580]     Audit-Id: 77335ddb-4bca-4f4d-8348-909a6919145f
	I0910 19:39:04.328160     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:04.328160     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:04.328430     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"605","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3769 chars]
	I0910 19:39:04.329053     716 pod_ready.go:93] pod "kube-proxy-qqrrg" in "kube-system" namespace has status "Ready":"True"
	I0910 19:39:04.329129     716 pod_ready.go:82] duration metric: took 346.3631ms for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:04.329145     716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:04.528603     716 request.go:632] Waited for 198.6465ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:39:04.528689     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:39:04.528776     716 round_trippers.go:469] Request Headers:
	I0910 19:39:04.528776     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:04.528851     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:04.532486     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:39:04.532817     716 round_trippers.go:577] Response Headers:
	I0910 19:39:04.532817     716 round_trippers.go:580]     Audit-Id: 52ab71a3-9b72-4581-9f98-037132949612
	I0910 19:39:04.532817     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:04.532817     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:04.532817     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:04.532817     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:04.532817     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:04.533030     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"362","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6194 chars]
	I0910 19:39:04.731280     716 request.go:632] Waited for 197.0107ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:39:04.731617     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:39:04.731676     716 round_trippers.go:469] Request Headers:
	I0910 19:39:04.731676     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:04.731676     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:04.735060     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:39:04.735060     716 round_trippers.go:577] Response Headers:
	I0910 19:39:04.735060     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:04.735060     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:04 GMT
	I0910 19:39:04.735060     716 round_trippers.go:580]     Audit-Id: f0d993d2-ae62-4009-a11b-6ec1212186a4
	I0910 19:39:04.735060     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:04.735560     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:04.735560     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:04.735950     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"416","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4951 chars]
	I0910 19:39:04.736537     716 pod_ready.go:93] pod "kube-proxy-wqf2d" in "kube-system" namespace has status "Ready":"True"
	I0910 19:39:04.736644     716 pod_ready.go:82] duration metric: took 407.4723ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:04.736722     716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:04.920181     716 request.go:632] Waited for 183.1494ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:39:04.920309     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:39:04.920309     716 round_trippers.go:469] Request Headers:
	I0910 19:39:04.920309     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:04.920409     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:04.923113     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:39:04.923113     716 round_trippers.go:577] Response Headers:
	I0910 19:39:04.923773     716 round_trippers.go:580]     Audit-Id: 5fc8f27b-a57e-404b-ba87-fe2d34b6233a
	I0910 19:39:04.923773     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:04.923773     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:04.923864     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:04.923864     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:04.923955     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:05 GMT
	I0910 19:39:04.924246     716 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"371","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0910 19:39:05.122986     716 request.go:632] Waited for 197.5265ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:39:05.123112     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes/multinode-629100
	I0910 19:39:05.123112     716 round_trippers.go:469] Request Headers:
	I0910 19:39:05.123112     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:05.123112     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:05.125773     716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:39:05.125773     716 round_trippers.go:577] Response Headers:
	I0910 19:39:05.126331     716 round_trippers.go:580]     Audit-Id: 3cb36f8c-edb4-4bd6-9c71-e522bd9b111f
	I0910 19:39:05.126331     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:05.126331     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:05.126331     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:05.126420     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:05.126420     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:05 GMT
	I0910 19:39:05.126807     716 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"416","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","fi [truncated 4951 chars]
	I0910 19:39:05.127465     716 pod_ready.go:93] pod "kube-scheduler-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:39:05.127548     716 pod_ready.go:82] duration metric: took 390.8008ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:39:05.127548     716 pod_ready.go:39] duration metric: took 1.2038451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:39:05.127657     716 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:39:05.138925     716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:39:05.163411     716 system_svc.go:56] duration metric: took 35.7512ms WaitForService to wait for kubelet
	I0910 19:39:05.163411     716 kubeadm.go:582] duration metric: took 30.9763078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:39:05.163411     716 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:39:05.323609     716 request.go:632] Waited for 160.1872ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.210.71:8443/api/v1/nodes
	I0910 19:39:05.323718     716 round_trippers.go:463] GET https://172.31.210.71:8443/api/v1/nodes
	I0910 19:39:05.323718     716 round_trippers.go:469] Request Headers:
	I0910 19:39:05.323718     716 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:39:05.323876     716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:39:05.327196     716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:39:05.327196     716 round_trippers.go:577] Response Headers:
	I0910 19:39:05.328141     716 round_trippers.go:580]     Audit-Id: 5ea08432-3f60-4ba1-86e6-3f86dbd62cc8
	I0910 19:39:05.328141     716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:39:05.328175     716 round_trippers.go:580]     Content-Type: application/json
	I0910 19:39:05.328175     716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:39:05.328175     716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:39:05.328175     716 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:39:05 GMT
	I0910 19:39:05.328751     716 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"607"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"416","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9765 chars]
	I0910 19:39:05.329857     716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:39:05.329917     716 node_conditions.go:123] node cpu capacity is 2
	I0910 19:39:05.329978     716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:39:05.329978     716 node_conditions.go:123] node cpu capacity is 2
	I0910 19:39:05.329978     716 node_conditions.go:105] duration metric: took 166.5562ms to run NodePressure ...
	I0910 19:39:05.330039     716 start.go:241] waiting for startup goroutines ...
	I0910 19:39:05.330102     716 start.go:255] writing updated cluster config ...
	I0910 19:39:05.340943     716 ssh_runner.go:195] Run: rm -f paused
	I0910 19:39:05.463385     716 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:39:05.467152     716 out.go:177] * Done! kubectl is now configured to use "multinode-629100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.425761836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.433229513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.436348512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.436505722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.437131262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:36:07 multinode-629100 cri-dockerd[1327]: time="2024-09-10T19:36:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a4b56ccc3790e80eff91916cb0028edfe7f5d159c231ddc0a45485daa7fd84f/resolv.conf as [nameserver 172.31.208.1]"
	Sep 10 19:36:07 multinode-629100 cri-dockerd[1327]: time="2024-09-10T19:36:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf116f91589fc33217cb812686068a83016df92bf9f0367c0675a959db050a8a/resolv.conf as [nameserver 172.31.208.1]"
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.750217569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.750297374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.750310675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.750418582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.843536474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.843702684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.843722186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:36:07 multinode-629100 dockerd[1432]: time="2024-09-10T19:36:07.845190682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:39:28 multinode-629100 dockerd[1432]: time="2024-09-10T19:39:28.035687018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:39:28 multinode-629100 dockerd[1432]: time="2024-09-10T19:39:28.035802226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:39:28 multinode-629100 dockerd[1432]: time="2024-09-10T19:39:28.035820227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:39:28 multinode-629100 dockerd[1432]: time="2024-09-10T19:39:28.036070744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:39:28 multinode-629100 cri-dockerd[1327]: time="2024-09-10T19:39:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea5e1070e7deaae3222a47ad56950b84b3bec1951593b0a9c4c94e20ef527e03/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 19:39:29 multinode-629100 cri-dockerd[1327]: time="2024-09-10T19:39:29Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Sep 10 19:39:29 multinode-629100 dockerd[1432]: time="2024-09-10T19:39:29.828903550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:39:29 multinode-629100 dockerd[1432]: time="2024-09-10T19:39:29.828992557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:39:29 multinode-629100 dockerd[1432]: time="2024-09-10T19:39:29.829014858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:39:29 multinode-629100 dockerd[1432]: time="2024-09-10T19:39:29.829158068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1a88f7f52270       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   46 seconds ago      Running             busybox                   0                   ea5e1070e7dea       busybox-7dff88458-lzs87
	039fd49f157a9       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   bf116f91589fc       coredns-6f6b679f8f-srtv8
	35f4bfd5434b1       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   3a4b56ccc3790       storage-provisioner
	33f88ed7aee25       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              4 minutes ago       Running             kindnet-cni               0                   1d92603202b00       kindnet-lj2v2
	85b03f4986715       ad83b2ca7b09e                                                                                         4 minutes ago       Running             kube-proxy                0                   4e550827f00f7       kube-proxy-wqf2d
	76702d5d897eb       2e96e5913fc06                                                                                         4 minutes ago       Running             etcd                      0                   f85e1c01f68df       etcd-multinode-629100
	ea8d0b0af86de       604f5db92eaa8                                                                                         4 minutes ago       Running             kube-apiserver            0                   d3ab7c79ce4bf       kube-apiserver-multinode-629100
	5cb559fed2d8a       1766f54c897f0                                                                                         4 minutes ago       Running             kube-scheduler            0                   49d9c6949234d       kube-scheduler-multinode-629100
	ea7220d439d1b       045733566833c                                                                                         4 minutes ago       Running             kube-controller-manager   0                   db7037ca07a46       kube-controller-manager-multinode-629100
	
	
	==> coredns [039fd49f157a] <==
	[INFO] 10.244.1.2:49869 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098807s
	[INFO] 10.244.0.3:34449 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176112s
	[INFO] 10.244.0.3:49423 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092007s
	[INFO] 10.244.0.3:43701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095107s
	[INFO] 10.244.0.3:51536 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048303s
	[INFO] 10.244.0.3:59362 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00028942s
	[INFO] 10.244.0.3:37417 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014271s
	[INFO] 10.244.0.3:50609 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077805s
	[INFO] 10.244.0.3:45492 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014891s
	[INFO] 10.244.1.2:47303 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100307s
	[INFO] 10.244.1.2:50959 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013871s
	[INFO] 10.244.1.2:34061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064904s
	[INFO] 10.244.1.2:33504 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059204s
	[INFO] 10.244.0.3:44472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014991s
	[INFO] 10.244.0.3:51126 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130209s
	[INFO] 10.244.0.3:35880 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062805s
	[INFO] 10.244.0.3:47290 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114308s
	[INFO] 10.244.1.2:59801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127909s
	[INFO] 10.244.1.2:44820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105107s
	[INFO] 10.244.1.2:51097 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169412s
	[INFO] 10.244.1.2:50721 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136709s
	[INFO] 10.244.0.3:48616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000315622s
	[INFO] 10.244.0.3:45256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000171611s
	[INFO] 10.244.0.3:51021 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077906s
	[INFO] 10.244.0.3:42471 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135209s
	
	
	==> describe nodes <==
	Name:               multinode-629100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-629100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-629100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T19_35_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 19:35:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-629100
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:40:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:39:45 +0000   Tue, 10 Sep 2024 19:35:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:39:45 +0000   Tue, 10 Sep 2024 19:35:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:39:45 +0000   Tue, 10 Sep 2024 19:35:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:39:45 +0000   Tue, 10 Sep 2024 19:36:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.210.71
	  Hostname:    multinode-629100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf2721342d3a4fcbae368702c8d1fd0d
	  System UUID:                e294be3b-926e-3f4f-8147-8c2e1d6d31e8
	  Boot ID:                    03c35675-898a-498e-98be-5dd1c73083bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lzs87                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-6f6b679f8f-srtv8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m29s
	  kube-system                 etcd-multinode-629100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m34s
	  kube-system                 kindnet-lj2v2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m29s
	  kube-system                 kube-apiserver-multinode-629100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-controller-manager-multinode-629100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-proxy-wqf2d                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-multinode-629100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m27s  kube-proxy       
	  Normal  Starting                 4m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m34s  kubelet          Node multinode-629100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s  kubelet          Node multinode-629100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s  kubelet          Node multinode-629100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m30s  node-controller  Node multinode-629100 event: Registered Node multinode-629100 in Controller
	  Normal  NodeReady                4m9s   kubelet          Node multinode-629100 status is now: NodeReady
	
	
	Name:               multinode-629100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-629100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-629100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T19_38_34_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 19:38:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-629100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:40:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:39:34 +0000   Tue, 10 Sep 2024 19:38:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:39:34 +0000   Tue, 10 Sep 2024 19:38:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:39:34 +0000   Tue, 10 Sep 2024 19:38:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:39:34 +0000   Tue, 10 Sep 2024 19:39:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.209.0
	  Hostname:    multinode-629100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f8add425db64a84b0aebc0c59875a53
	  System UUID:                0fc9d8ea-7869-bd42-95ee-012842e5540a
	  Boot ID:                    c89d9f92-70f3-4f33-ae15-6686861d0a1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7c4qt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-5crht              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      102s
	  kube-system                 kube-proxy-qqrrg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 91s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  102s (x2 over 102s)  kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x2 over 102s)  kubelet          Node multinode-629100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x2 over 102s)  kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                 node-controller  Node multinode-629100-m02 event: Registered Node multinode-629100-m02 in Controller
	  Normal  NodeReady                71s                  kubelet          Node multinode-629100-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep10 19:34] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.187438] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Sep10 19:35] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[  +0.099423] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.478112] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.176720] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.207694] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.785737] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.183991] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.183864] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +0.256720] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[ +11.238380] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.088703] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.628385] systemd-fstab-generator[1665]: Ignoring "noauto" option for root device
	[  +6.139160] systemd-fstab-generator[1812]: Ignoring "noauto" option for root device
	[  +0.086763] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.026552] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +0.110660] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.435237] systemd-fstab-generator[2323]: Ignoring "noauto" option for root device
	[  +0.205159] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.739603] kauditd_printk_skb: 51 callbacks suppressed
	[Sep10 19:38] hrtimer: interrupt took 2064237 ns
	[Sep10 19:39] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [76702d5d897e] <==
	{"level":"info","ts":"2024-09-10T19:35:36.317681Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:35:36.318707Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:35:36.319778Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T19:35:36.320034Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T19:35:36.320061Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T19:35:36.320231Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8392dc51522b279d","local-member-id":"1d523ecf11423acf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:35:36.320798Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:35:36.320995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:35:36.321129Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:35:36.321980Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.210.71:2379"}
	{"level":"warn","ts":"2024-09-10T19:35:53.355968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.018817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T19:35:53.356139Z","caller":"traceutil/trace.go:171","msg":"trace[1183528053] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:372; }","duration":"107.211829ms","start":"2024-09-10T19:35:53.248911Z","end":"2024-09-10T19:35:53.356122Z","steps":["trace[1183528053] 'range keys from in-memory index tree'  (duration: 106.850307ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T19:35:53.421632Z","caller":"traceutil/trace.go:171","msg":"trace[709244927] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"112.700474ms","start":"2024-09-10T19:35:53.308917Z","end":"2024-09-10T19:35:53.421617Z","steps":["trace[709244927] 'process raft request'  (duration: 112.504161ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T19:35:53.658623Z","caller":"traceutil/trace.go:171","msg":"trace[1959871682] linearizableReadLoop","detail":"{readStateIndex:385; appliedIndex:384; }","duration":"206.49696ms","start":"2024-09-10T19:35:53.452107Z","end":"2024-09-10T19:35:53.658604Z","steps":["trace[1959871682] 'read index received'  (duration: 148.036991ms)","trace[1959871682] 'applied index is now lower than readState.Index'  (duration: 58.459169ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T19:35:53.659318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.159903ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T19:35:53.659621Z","caller":"traceutil/trace.go:171","msg":"trace[1171856607] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:374; }","duration":"207.460922ms","start":"2024-09-10T19:35:53.452102Z","end":"2024-09-10T19:35:53.659563Z","steps":["trace[1171856607] 'agreement among raft nodes before linearized reading'  (duration: 206.873785ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T19:35:53.660539Z","caller":"traceutil/trace.go:171","msg":"trace[305283066] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"229.90433ms","start":"2024-09-10T19:35:53.430622Z","end":"2024-09-10T19:35:53.660527Z","steps":["trace[305283066] 'process raft request'  (duration: 169.538241ms)","trace[305283066] 'compare'  (duration: 57.858032ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T19:35:53.861678Z","caller":"traceutil/trace.go:171","msg":"trace[998878646] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"188.155309ms","start":"2024-09-10T19:35:53.673507Z","end":"2024-09-10T19:35:53.861662Z","steps":["trace[998878646] 'process raft request'  (duration: 184.119756ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T19:38:37.439024Z","caller":"traceutil/trace.go:171","msg":"trace[651407674] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"163.899896ms","start":"2024-09-10T19:38:37.275110Z","end":"2024-09-10T19:38:37.439010Z","steps":["trace[651407674] 'process raft request'  (duration: 163.799489ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:38:37.666172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.032129ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T19:38:37.666298Z","caller":"traceutil/trace.go:171","msg":"trace[1186547187] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:565; }","duration":"214.19414ms","start":"2024-09-10T19:38:37.452092Z","end":"2024-09-10T19:38:37.666286Z","steps":["trace[1186547187] 'range keys from in-memory index tree'  (duration: 214.015427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:38:40.136267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.491277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-10T19:38:40.136334Z","caller":"traceutil/trace.go:171","msg":"trace[1024917045] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:0; response_revision:569; }","duration":"112.574982ms","start":"2024-09-10T19:38:40.023746Z","end":"2024-09-10T19:38:40.136322Z","steps":["trace[1024917045] 'count revisions from in-memory index tree'  (duration: 112.452774ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:38:49.421976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.704078ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-629100-m02\" ","response":"range_response_count:1 size:3140"}
	{"level":"info","ts":"2024-09-10T19:38:49.422626Z","caller":"traceutil/trace.go:171","msg":"trace[844364357] range","detail":"{range_begin:/registry/minions/multinode-629100-m02; range_end:; response_count:1; response_revision:584; }","duration":"274.33002ms","start":"2024-09-10T19:38:49.148252Z","end":"2024-09-10T19:38:49.422582Z","steps":["trace[844364357] 'range keys from in-memory index tree'  (duration: 273.539168ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:40:15 up 6 min,  0 users,  load average: 0.18, 0.32, 0.18
	Linux multinode-629100 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33f88ed7aee2] <==
	I0910 19:39:15.153365       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:39:25.159207       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:39:25.159318       1 main.go:299] handling current node
	I0910 19:39:25.159337       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:39:25.159357       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:39:35.152688       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:39:35.152975       1 main.go:299] handling current node
	I0910 19:39:35.153020       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:39:35.153028       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:39:45.152237       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:39:45.152338       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:39:45.152907       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:39:45.153001       1 main.go:299] handling current node
	I0910 19:39:55.152906       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:39:55.152979       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:39:55.153114       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:39:55.153156       1 main.go:299] handling current node
	I0910 19:40:05.157937       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:40:05.158036       1 main.go:299] handling current node
	I0910 19:40:05.158054       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:40:05.158063       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:40:15.161457       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:40:15.161547       1 main.go:299] handling current node
	I0910 19:40:15.161563       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:40:15.161569       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ea8d0b0af86d] <==
	I0910 19:35:38.684626       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0910 19:35:38.695309       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0910 19:35:38.695525       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0910 19:35:39.723151       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 19:35:39.805006       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0910 19:35:39.918614       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0910 19:35:39.930883       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.210.71]
	I0910 19:35:39.932124       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 19:35:39.945132       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 19:35:40.749667       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 19:35:41.022209       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 19:35:41.051721       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0910 19:35:41.070448       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 19:35:46.342276       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0910 19:35:46.514766       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0910 19:39:33.805559       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65434: use of closed network connection
	E0910 19:39:34.207424       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65436: use of closed network connection
	E0910 19:39:34.689864       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65438: use of closed network connection
	E0910 19:39:35.104884       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65440: use of closed network connection
	E0910 19:39:35.514324       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65442: use of closed network connection
	E0910 19:39:35.932898       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65444: use of closed network connection
	E0910 19:39:36.680857       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65447: use of closed network connection
	E0910 19:39:47.088246       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65449: use of closed network connection
	E0910 19:39:47.487001       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65452: use of closed network connection
	E0910 19:39:57.913857       1 conn.go:339] Error on socket receive: read tcp 172.31.210.71:8443->172.31.208.1:65454: use of closed network connection
	
	
	==> kube-controller-manager [ea7220d439d1] <==
	I0910 19:36:10.692324       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0910 19:36:12.143797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100"
	I0910 19:38:33.600039       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-629100-m02\" does not exist"
	I0910 19:38:33.633360       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-629100-m02" podCIDRs=["10.244.1.0/24"]
	I0910 19:38:33.633533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:38:33.633563       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:38:33.934317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:38:34.406175       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:38:35.716615       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-629100-m02"
	I0910 19:38:35.771000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:38:44.061194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:39:04.148950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:39:04.149005       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:39:04.168673       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:39:05.741912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:39:27.242665       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.477907ms"
	I0910 19:39:27.273637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.850145ms"
	I0910 19:39:27.288736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.043997ms"
	I0910 19:39:27.289327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.003µs"
	I0910 19:39:30.720826       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="24.186264ms"
	I0910 19:39:30.720982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.803µs"
	I0910 19:39:31.437677       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.334173ms"
	I0910 19:39:31.438798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.106µs"
	I0910 19:39:35.003890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m02"
	I0910 19:39:45.887024       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100"
	
	
	==> kube-proxy [85b03f498671] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 19:35:47.926887       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 19:35:47.936949       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.31.210.71"]
	E0910 19:35:47.937088       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 19:35:47.985558       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 19:35:47.985667       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 19:35:47.985694       1 server_linux.go:169] "Using iptables Proxier"
	I0910 19:35:47.989351       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 19:35:47.989836       1 server.go:483] "Version info" version="v1.31.0"
	I0910 19:35:47.989943       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:35:47.992068       1 config.go:197] "Starting service config controller"
	I0910 19:35:47.994045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 19:35:47.994294       1 config.go:326] "Starting node config controller"
	I0910 19:35:47.994439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 19:35:47.993518       1 config.go:104] "Starting endpoint slice config controller"
	I0910 19:35:47.996484       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 19:35:48.095182       1 shared_informer.go:320] Caches are synced for service config
	I0910 19:35:48.095444       1 shared_informer.go:320] Caches are synced for node config
	I0910 19:35:48.097751       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5cb559fed2d8] <==
	W0910 19:35:38.864282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 19:35:38.864572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:38.875237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 19:35:38.875432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:38.900948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 19:35:38.900977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:38.957305       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 19:35:38.957506       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 19:35:38.997653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 19:35:38.997837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.004298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 19:35:39.004563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.017869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 19:35:39.017920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.089188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 19:35:39.089469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.288341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 19:35:39.288858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.326675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 19:35:39.327101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.349957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 19:35:39.350170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.392655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 19:35:39.392930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 19:35:40.833153       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 19:36:06 multinode-629100 kubelet[2227]: I0910 19:36:06.875798    2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/511e6e52-fc2a-4562-9903-42ee1f2e0a2d-tmp\") pod \"storage-provisioner\" (UID: \"511e6e52-fc2a-4562-9903-42ee1f2e0a2d\") " pod="kube-system/storage-provisioner"
	Sep 10 19:36:08 multinode-629100 kubelet[2227]: I0910 19:36:08.050117    2227 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.050089099 podStartE2EDuration="15.050089099s" podCreationTimestamp="2024-09-10 19:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-10 19:36:08.033032108 +0000 UTC m=+27.198911364" watchObservedRunningTime="2024-09-10 19:36:08.050089099 +0000 UTC m=+27.215968355"
	Sep 10 19:36:41 multinode-629100 kubelet[2227]: E0910 19:36:41.097471    2227 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:36:41 multinode-629100 kubelet[2227]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:36:41 multinode-629100 kubelet[2227]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:36:41 multinode-629100 kubelet[2227]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:36:41 multinode-629100 kubelet[2227]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:37:41 multinode-629100 kubelet[2227]: E0910 19:37:41.098298    2227 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:37:41 multinode-629100 kubelet[2227]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:37:41 multinode-629100 kubelet[2227]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:37:41 multinode-629100 kubelet[2227]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:37:41 multinode-629100 kubelet[2227]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:38:41 multinode-629100 kubelet[2227]: E0910 19:38:41.098410    2227 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:38:41 multinode-629100 kubelet[2227]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:38:41 multinode-629100 kubelet[2227]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:38:41 multinode-629100 kubelet[2227]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:38:41 multinode-629100 kubelet[2227]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:39:27 multinode-629100 kubelet[2227]: I0910 19:39:27.232577    2227 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-srtv8" podStartSLOduration=221.232556481 podStartE2EDuration="3m41.232556481s" podCreationTimestamp="2024-09-10 19:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-10 19:36:08.057856341 +0000 UTC m=+27.223735497" watchObservedRunningTime="2024-09-10 19:39:27.232556481 +0000 UTC m=+226.398435637"
	Sep 10 19:39:27 multinode-629100 kubelet[2227]: I0910 19:39:27.421787    2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpkh7\" (UniqueName: \"kubernetes.io/projected/69a44022-87e6-4d09-8fa6-c39632e1c3fe-kube-api-access-gpkh7\") pod \"busybox-7dff88458-lzs87\" (UID: \"69a44022-87e6-4d09-8fa6-c39632e1c3fe\") " pod="default/busybox-7dff88458-lzs87"
	Sep 10 19:39:35 multinode-629100 kubelet[2227]: E0910 19:39:35.933288    2227 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47998->127.0.0.1:41193: write tcp 127.0.0.1:47998->127.0.0.1:41193: write: broken pipe
	Sep 10 19:39:41 multinode-629100 kubelet[2227]: E0910 19:39:41.097447    2227 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:39:41 multinode-629100 kubelet[2227]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:39:41 multinode-629100 kubelet[2227]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:39:41 multinode-629100 kubelet[2227]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:39:41 multinode-629100 kubelet[2227]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-629100 -n multinode-629100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-629100 -n multinode-629100: (10.7495402s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-629100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (52.35s)

TestMultiNode/serial/RestartKeepsNodes (557.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-629100
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-629100
E0910 19:54:10.857845    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-629100: (1m32.9005648s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-629100 --wait=true -v=8 --alsologtostderr
E0910 19:56:03.493644    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:57:59.956419    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:59:10.887461    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-629100 --wait=true -v=8 --alsologtostderr: (7m12.4631262s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-629100
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-629100	172.31.210.71
multinode-629100-m02	172.31.209.0
multinode-629100-m03	172.31.210.110

After restart: multinode-629100	172.31.215.172
multinode-629100-m02	172.31.210.34
multinode-629100-m03	172.31.214.220
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-629100 -n multinode-629100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-629100 -n multinode-629100: (10.5509955s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 logs -n 25: (8.0068026s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:46 UTC | 10 Sep 24 19:46 UTC |
	|         | multinode-629100-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:46 UTC | 10 Sep 24 19:46 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:46 UTC | 10 Sep 24 19:46 UTC |
	|         | multinode-629100-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:46 UTC | 10 Sep 24 19:46 UTC |
	|         | multinode-629100:/home/docker/cp-test_multinode-629100-m02_multinode-629100.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:46 UTC | 10 Sep 24 19:47 UTC |
	|         | multinode-629100-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n multinode-629100 sudo cat                                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | /home/docker/cp-test_multinode-629100-m02_multinode-629100.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | multinode-629100-m03:/home/docker/cp-test_multinode-629100-m02_multinode-629100-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | multinode-629100-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n multinode-629100-m03 sudo cat                                                                    | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | /home/docker/cp-test_multinode-629100-m02_multinode-629100-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp testdata\cp-test.txt                                                                                 | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | multinode-629100-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:48 UTC |
	|         | multinode-629100-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | multinode-629100-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | multinode-629100:/home/docker/cp-test_multinode-629100-m03_multinode-629100.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | multinode-629100-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n multinode-629100 sudo cat                                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-629100-m03_multinode-629100.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:49 UTC |
	|         | multinode-629100-m02:/home/docker/cp-test_multinode-629100-m03_multinode-629100-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:49 UTC | 10 Sep 24 19:49 UTC |
	|         | multinode-629100-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n multinode-629100-m02 sudo cat                                                                    | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:49 UTC | 10 Sep 24 19:49 UTC |
	|         | /home/docker/cp-test_multinode-629100-m03_multinode-629100-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-629100 node stop m03                                                                                           | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:49 UTC | 10 Sep 24 19:49 UTC |
	| node    | multinode-629100 node start                                                                                              | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:50 UTC | 10 Sep 24 19:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-629100                                                                                                 | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:53 UTC |                     |
	| stop    | -p multinode-629100                                                                                                      | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:53 UTC | 10 Sep 24 19:54 UTC |
	| start   | -p multinode-629100                                                                                                      | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:54 UTC | 10 Sep 24 20:02 UTC |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-629100                                                                                                 | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 20:02 UTC |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 19:54:51
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 19:54:51.349718    8968 out.go:345] Setting OutFile to fd 532 ...
	I0910 19:54:51.393905    8968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:54:51.393905    8968 out.go:358] Setting ErrFile to fd 1028...
	I0910 19:54:51.393905    8968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:54:51.409268    8968 out.go:352] Setting JSON to false
	I0910 19:54:51.411441    8968 start.go:129] hostinfo: {"hostname":"minikube5","uptime":109354,"bootTime":1725888737,"procs":182,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 19:54:51.411441    8968 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 19:54:51.453238    8968 out.go:177] * [multinode-629100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 19:54:51.644576    8968 notify.go:220] Checking for updates...
	I0910 19:54:51.650331    8968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:54:51.714764    8968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 19:54:51.727209    8968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 19:54:51.757677    8968 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 19:54:51.771287    8968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 19:54:51.780509    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:54:51.780509    8968 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 19:54:56.560698    8968 out.go:177] * Using the hyperv driver based on existing profile
	I0910 19:54:56.569262    8968 start.go:297] selected driver: hyperv
	I0910 19:54:56.569262    8968 start.go:901] validating driver "hyperv" against &{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.210.71 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fa
lse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:54:56.569262    8968 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 19:54:56.617787    8968 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:54:56.617787    8968 cni.go:84] Creating CNI manager for ""
	I0910 19:54:56.617787    8968 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 19:54:56.618785    8968 start.go:340] cluster config:
	{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.210.71 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner
:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:54:56.618785    8968 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 19:54:56.757103    8968 out.go:177] * Starting "multinode-629100" primary control-plane node in "multinode-629100" cluster
	I0910 19:54:56.798051    8968 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:54:56.798507    8968 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 19:54:56.798617    8968 cache.go:56] Caching tarball of preloaded images
	I0910 19:54:56.799037    8968 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 19:54:56.799494    8968 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 19:54:56.799494    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:54:56.802428    8968 start.go:360] acquireMachinesLock for multinode-629100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:54:56.802734    8968 start.go:364] duration metric: took 169.4µs to acquireMachinesLock for "multinode-629100"
	I0910 19:54:56.803023    8968 start.go:96] Skipping create...Using existing machine configuration
	I0910 19:54:56.803023    8968 fix.go:54] fixHost starting: 
	I0910 19:54:56.803786    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:54:59.113554    8968 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 19:54:59.113554    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:54:59.114584    8968 fix.go:112] recreateIfNeeded on multinode-629100: state=Stopped err=<nil>
	W0910 19:54:59.114637    8968 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 19:54:59.202325    8968 out.go:177] * Restarting existing hyperv VM for "multinode-629100" ...
	I0910 19:54:59.211615    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-629100
	I0910 19:55:02.248485    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:02.248574    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:02.248574    8968 main.go:141] libmachine: Waiting for host to start...
	I0910 19:55:02.248641    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:04.191552    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:04.191552    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:04.191552    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:06.358788    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:06.358980    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:07.362604    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:09.251843    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:09.252249    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:09.252249    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:11.454004    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:11.454158    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:12.459240    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:14.390705    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:14.390705    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:14.390705    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:16.600240    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:16.600240    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:17.603127    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:19.541334    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:19.541848    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:19.541969    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:21.799849    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:21.799849    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:22.812796    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:24.768148    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:24.768148    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:24.768148    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:27.088196    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:27.088196    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:27.091273    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:28.970540    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:28.970540    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:28.971322    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:31.202478    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:31.202478    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:31.203629    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:55:31.205918    8968 machine.go:93] provisionDockerMachine start ...
	I0910 19:55:31.206095    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:33.017145    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:33.017145    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:33.017759    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:35.222003    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:35.222257    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:35.226240    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:55:35.226873    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:55:35.226873    8968 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:55:35.356834    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:55:35.356914    8968 buildroot.go:166] provisioning hostname "multinode-629100"
	I0910 19:55:35.357030    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:37.157319    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:37.157319    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:37.157992    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:39.326498    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:39.327278    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:39.330241    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:55:39.330819    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:55:39.330819    8968 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629100 && echo "multinode-629100" | sudo tee /etc/hostname
	I0910 19:55:39.470925    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629100
	
	I0910 19:55:39.471025    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:41.311801    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:41.311801    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:41.312078    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:43.518799    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:43.518799    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:43.522656    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:55:43.523330    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:55:43.523330    8968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629100' | sudo tee -a /etc/hosts; 
				fi
			fi
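The SSH command above patches the guest's `/etc/hosts` so the `127.0.1.1` alias points at the node name. A standalone sketch of that same logic (not minikube code), exercised against a temporary copy instead of the real `/etc/hosts`:

```shell
# Sketch of the hostname-alias patch run over SSH above, against a temp copy.
NAME="multinode-629100"
HOSTS="$(mktemp)"
printf '127.0.0.1 localhost\n127.0.1.1 stale-name\n' > "$HOSTS"
if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # an alias line already exists: rewrite it in place
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # no alias line yet: append one
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```

The outer `grep` makes the step idempotent: a host file that already names the node is left untouched, which is why the log's run produces empty SSH output.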
	I0910 19:55:43.666796    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:55:43.667002    8968 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 19:55:43.667002    8968 buildroot.go:174] setting up certificates
	I0910 19:55:43.667002    8968 provision.go:84] configureAuth start
	I0910 19:55:43.667175    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:45.490903    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:45.490903    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:45.491280    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:47.672838    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:47.673669    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:47.673669    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:49.478125    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:49.478125    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:49.478125    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:51.676201    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:51.676201    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:51.676869    8968 provision.go:143] copyHostCerts
	I0910 19:55:51.676948    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 19:55:51.677197    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 19:55:51.677197    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 19:55:51.677515    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 19:55:51.678350    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 19:55:51.678475    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 19:55:51.678475    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 19:55:51.678475    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 19:55:51.679245    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 19:55:51.679245    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 19:55:51.679778    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 19:55:51.680005    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 19:55:51.680594    8968 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-629100 san=[127.0.0.1 172.31.215.172 localhost minikube multinode-629100]
	I0910 19:55:51.940690    8968 provision.go:177] copyRemoteCerts
	I0910 19:55:51.949934    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:55:51.949934    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:53.768315    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:53.768315    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:53.768865    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:55.948738    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:55.948738    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:55.949913    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:55:56.060981    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1107756s)
	I0910 19:55:56.061083    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 19:55:56.061204    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 19:55:56.102074    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 19:55:56.102162    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0910 19:55:56.141093    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 19:55:56.142048    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:55:56.186775    8968 provision.go:87] duration metric: took 12.5189453s to configureAuth
	I0910 19:55:56.186909    8968 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:55:56.187719    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:55:56.187871    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:58.006146    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:58.006146    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:58.006457    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:00.204140    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:00.204140    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:00.208874    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:00.209038    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:00.209038    8968 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 19:56:00.343982    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 19:56:00.344029    8968 buildroot.go:70] root file system type: tmpfs
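The root-filesystem probe above is a one-liner: `df` emits only the filesystem-type column for `/`, and `tail` drops the `Type` header row, leaving just the value (`tmpfs` on this buildroot guest; it will differ on other hosts):

```shell
# Print only the fstype of the root mount; tail strips df's header line.
df --output=fstype / | tail -n 1
```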
	I0910 19:56:00.344103    8968 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 19:56:00.344103    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:02.241359    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:02.241898    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:02.241898    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:04.484048    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:04.484048    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:04.488459    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:04.488939    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:04.488939    8968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 19:56:04.647131    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 19:56:04.647238    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:06.532276    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:06.532276    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:06.532276    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:08.757503    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:08.757503    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:08.762888    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:08.763675    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:08.763745    8968 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 19:56:11.168441    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
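The SSH command above uses a compare-and-swap pattern for installing the unit file: the candidate is written to `docker.service.new`, and only when `diff` reports a difference (including the "can't stat" case seen here, where no unit existed yet) is it moved over the live unit and the daemon restarted. A sketch of that pattern against a temp directory instead of `/lib/systemd/system`:

```shell
# Compare-and-swap unit install: replace the live unit only when it changed.
DIR="$(mktemp -d)"
UNIT="$DIR/docker.service"
printf '[Unit]\nDescription=old unit\n' > "$UNIT"
printf '[Unit]\nDescription=new unit\n' > "$UNIT.new"
if ! diff -u "$UNIT" "$UNIT.new" > /dev/null 2>&1; then
    mv "$UNIT.new" "$UNIT"
    # on the real host this branch continues with:
    #   systemctl daemon-reload && systemctl enable docker && systemctl restart docker
fi
grep Description "$UNIT"
```

Skipping the reload/restart when nothing changed is what keeps repeated provisioning runs from bouncing the Docker daemon needlessly.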
	
	I0910 19:56:11.168550    8968 machine.go:96] duration metric: took 39.9599043s to provisionDockerMachine
	I0910 19:56:11.168600    8968 start.go:293] postStartSetup for "multinode-629100" (driver="hyperv")
	I0910 19:56:11.168600    8968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:56:11.176637    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:56:11.176637    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:13.016876    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:13.016876    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:13.017587    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:15.249047    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:15.249047    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:15.250101    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:56:15.350483    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1735704s)
	I0910 19:56:15.358673    8968 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:56:15.365836    8968 command_runner.go:130] > NAME=Buildroot
	I0910 19:56:15.365943    8968 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 19:56:15.365943    8968 command_runner.go:130] > ID=buildroot
	I0910 19:56:15.365943    8968 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 19:56:15.365943    8968 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 19:56:15.366076    8968 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:56:15.366140    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 19:56:15.366465    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 19:56:15.366897    8968 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 19:56:15.366897    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 19:56:15.375577    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:56:15.392551    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 19:56:15.445284    8968 start.go:296] duration metric: took 4.2764022s for postStartSetup
	I0910 19:56:15.445284    8968 fix.go:56] duration metric: took 1m18.6370603s for fixHost
	I0910 19:56:15.445284    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:17.260547    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:17.260547    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:17.261046    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:19.525121    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:19.525121    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:19.529821    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:19.530372    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:19.530455    8968 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:56:19.657935    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725998179.875545666
	
	I0910 19:56:19.657935    8968 fix.go:216] guest clock: 1725998179.875545666
	I0910 19:56:19.657935    8968 fix.go:229] Guest: 2024-09-10 19:56:19.875545666 +0000 UTC Remote: 2024-09-10 19:56:15.4452848 +0000 UTC m=+84.164019901 (delta=4.430260866s)
	I0910 19:56:19.658021    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:21.569597    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:21.569597    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:21.570477    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:23.827065    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:23.827065    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:23.832337    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:23.833025    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:23.833025    8968 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725998179
	I0910 19:56:23.966594    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 19:56:19 UTC 2024
	
	I0910 19:56:23.966674    8968 fix.go:236] clock set: Tue Sep 10 19:56:19 UTC 2024
	 (err=<nil>)
	I0910 19:56:23.966674    8968 start.go:83] releasing machines lock for "multinode-629100", held for 1m27.1581778s
	I0910 19:56:23.966940    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:25.805789    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:25.805789    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:25.806273    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:28.057139    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:28.057139    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:28.061931    8968 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 19:56:28.062021    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:28.070593    8968 ssh_runner.go:195] Run: cat /version.json
	I0910 19:56:28.070593    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:29.995916    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:29.995916    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:30.001879    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:30.006636    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:30.006636    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:30.006636    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:32.328310    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:32.329469    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:32.329546    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:56:32.351074    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:32.351074    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:32.351074    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:56:32.422719    8968 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0910 19:56:32.422799    8968 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3605809s)
	W0910 19:56:32.422920    8968 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 19:56:32.439045    8968 command_runner.go:130] > {"iso_version": "v1.34.0-1725912912-19598", "kicbase_version": "v0.0.45", "minikube_version": "v1.34.0", "commit": "a47e98bacf93197560d0f08408949de0434951d5"}
	I0910 19:56:32.439045    8968 ssh_runner.go:235] Completed: cat /version.json: (4.3681648s)
	I0910 19:56:32.449121    8968 ssh_runner.go:195] Run: systemctl --version
	I0910 19:56:32.458656    8968 command_runner.go:130] > systemd 252 (252)
	I0910 19:56:32.458656    8968 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0910 19:56:32.467038    8968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 19:56:32.474641    8968 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0910 19:56:32.475633    8968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:56:32.484219    8968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:56:32.509754    8968 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0910 19:56:32.510072    8968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:56:32.510072    8968 start.go:495] detecting cgroup driver to use...
	I0910 19:56:32.510345    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:56:32.551712    8968 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0910 19:56:32.561610    8968 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 19:56:32.561717    8968 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 19:56:32.564699    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 19:56:32.594495    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 19:56:32.612335    8968 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 19:56:32.625083    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 19:56:32.650676    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:56:32.676749    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 19:56:32.703464    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:56:32.730334    8968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:56:32.759064    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 19:56:32.784719    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 19:56:32.813565    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 19:56:32.839376    8968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:56:32.861766    8968 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 19:56:32.872274    8968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:56:32.901215    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:33.068127    8968 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 19:56:33.095453    8968 start.go:495] detecting cgroup driver to use...
	I0910 19:56:33.104067    8968 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 19:56:33.126431    8968 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0910 19:56:33.126462    8968 command_runner.go:130] > [Unit]
	I0910 19:56:33.126462    8968 command_runner.go:130] > Description=Docker Application Container Engine
	I0910 19:56:33.126462    8968 command_runner.go:130] > Documentation=https://docs.docker.com
	I0910 19:56:33.126462    8968 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0910 19:56:33.126572    8968 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0910 19:56:33.126572    8968 command_runner.go:130] > StartLimitBurst=3
	I0910 19:56:33.126610    8968 command_runner.go:130] > StartLimitIntervalSec=60
	I0910 19:56:33.126610    8968 command_runner.go:130] > [Service]
	I0910 19:56:33.126610    8968 command_runner.go:130] > Type=notify
	I0910 19:56:33.126658    8968 command_runner.go:130] > Restart=on-failure
	I0910 19:56:33.126697    8968 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0910 19:56:33.126697    8968 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0910 19:56:33.126743    8968 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0910 19:56:33.126782    8968 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0910 19:56:33.126782    8968 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0910 19:56:33.126823    8968 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0910 19:56:33.126861    8968 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0910 19:56:33.126861    8968 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0910 19:56:33.126909    8968 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0910 19:56:33.126947    8968 command_runner.go:130] > ExecStart=
	I0910 19:56:33.126993    8968 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0910 19:56:33.127031    8968 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0910 19:56:33.127078    8968 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0910 19:56:33.127116    8968 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0910 19:56:33.127116    8968 command_runner.go:130] > LimitNOFILE=infinity
	I0910 19:56:33.127163    8968 command_runner.go:130] > LimitNPROC=infinity
	I0910 19:56:33.127163    8968 command_runner.go:130] > LimitCORE=infinity
	I0910 19:56:33.127202    8968 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0910 19:56:33.127202    8968 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0910 19:56:33.127247    8968 command_runner.go:130] > TasksMax=infinity
	I0910 19:56:33.127247    8968 command_runner.go:130] > TimeoutStartSec=0
	I0910 19:56:33.127287    8968 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0910 19:56:33.127287    8968 command_runner.go:130] > Delegate=yes
	I0910 19:56:33.127332    8968 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0910 19:56:33.127332    8968 command_runner.go:130] > KillMode=process
	I0910 19:56:33.127371    8968 command_runner.go:130] > [Install]
	I0910 19:56:33.127371    8968 command_runner.go:130] > WantedBy=multi-user.target
	I0910 19:56:33.136772    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:56:33.167671    8968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:56:33.198610    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:56:33.229167    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:56:33.259175    8968 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 19:56:33.316352    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:56:33.338341    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:56:33.368711    8968 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0910 19:56:33.377603    8968 ssh_runner.go:195] Run: which cri-dockerd
	I0910 19:56:33.382637    8968 command_runner.go:130] > /usr/bin/cri-dockerd
	I0910 19:56:33.390594    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 19:56:33.406669    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 19:56:33.441617    8968 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 19:56:33.628107    8968 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 19:56:33.791205    8968 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 19:56:33.791377    8968 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 19:56:33.835239    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:34.010288    8968 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 19:56:36.641133    8968 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.630672s)
	I0910 19:56:36.649244    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 19:56:36.679771    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:56:36.709773    8968 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 19:56:36.884369    8968 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 19:56:37.045145    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:37.213939    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 19:56:37.250747    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:56:37.284206    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:37.450657    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 19:56:37.542232    8968 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 19:56:37.556832    8968 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 19:56:37.564808    8968 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0910 19:56:37.564808    8968 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 19:56:37.564808    8968 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0910 19:56:37.564808    8968 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0910 19:56:37.564808    8968 command_runner.go:130] > Access: 2024-09-10 19:56:37.695183981 +0000
	I0910 19:56:37.564808    8968 command_runner.go:130] > Modify: 2024-09-10 19:56:37.695183981 +0000
	I0910 19:56:37.564808    8968 command_runner.go:130] > Change: 2024-09-10 19:56:37.700184717 +0000
	I0910 19:56:37.564808    8968 command_runner.go:130] >  Birth: -
	I0910 19:56:37.564808    8968 start.go:563] Will wait 60s for crictl version
	I0910 19:56:37.572799    8968 ssh_runner.go:195] Run: which crictl
	I0910 19:56:37.578543    8968 command_runner.go:130] > /usr/bin/crictl
	I0910 19:56:37.586389    8968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:56:37.633980    8968 command_runner.go:130] > Version:  0.1.0
	I0910 19:56:37.633980    8968 command_runner.go:130] > RuntimeName:  docker
	I0910 19:56:37.633980    8968 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0910 19:56:37.633980    8968 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 19:56:37.633980    8968 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 19:56:37.641240    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:56:37.672602    8968 command_runner.go:130] > 27.2.0
	I0910 19:56:37.681777    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:56:37.708014    8968 command_runner.go:130] > 27.2.0
	I0910 19:56:37.711076    8968 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 19:56:37.711398    8968 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 19:56:37.718180    8968 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 19:56:37.718247    8968 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 19:56:37.718247    8968 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 19:56:37.718247    8968 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 19:56:37.720538    8968 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 19:56:37.720538    8968 ip.go:214] interface addr: 172.31.208.1/20
	I0910 19:56:37.728892    8968 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 19:56:37.734642    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:56:37.753261    8968 kubeadm.go:883] updating cluster {Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:56:37.753780    8968 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:56:37.761890    8968 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 19:56:37.788312    8968 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0910 19:56:37.788411    8968 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 19:56:37.788411    8968 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0910 19:56:37.788411    8968 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0910 19:56:37.788411    8968 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0910 19:56:37.788479    8968 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0910 19:56:37.788479    8968 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0910 19:56:37.788479    8968 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0910 19:56:37.788479    8968 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:56:37.788479    8968 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0910 19:56:37.788479    8968 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0910 19:56:37.788479    8968 docker.go:615] Images already preloaded, skipping extraction
	I0910 19:56:37.795589    8968 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 19:56:37.816540    8968 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0910 19:56:37.816540    8968 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:56:37.816540    8968 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0910 19:56:37.816540    8968 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0910 19:56:37.816540    8968 cache_images.go:84] Images are preloaded, skipping loading
	I0910 19:56:37.816540    8968 kubeadm.go:934] updating node { 172.31.215.172 8443 v1.31.0 docker true true} ...
	I0910 19:56:37.817541    8968 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-629100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.215.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:56:37.824538    8968 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 19:56:37.888144    8968 command_runner.go:130] > cgroupfs
	I0910 19:56:37.888144    8968 cni.go:84] Creating CNI manager for ""
	I0910 19:56:37.888144    8968 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 19:56:37.888144    8968 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 19:56:37.888144    8968 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.215.172 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-629100 NodeName:multinode-629100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.215.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.215.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:56:37.888144    8968 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.215.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-629100"
	  kubeletExtraArgs:
	    node-ip: 172.31.215.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.215.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:56:37.896670    8968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:56:37.913680    8968 command_runner.go:130] > kubeadm
	I0910 19:56:37.913739    8968 command_runner.go:130] > kubectl
	I0910 19:56:37.913739    8968 command_runner.go:130] > kubelet
	I0910 19:56:37.913739    8968 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:56:37.925790    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:56:37.940264    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 19:56:37.968391    8968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:56:37.996814    8968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0910 19:56:38.039321    8968 ssh_runner.go:195] Run: grep 172.31.215.172	control-plane.minikube.internal$ /etc/hosts
	I0910 19:56:38.045004    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.215.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
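The `/etc/hosts` rewrite above uses a grep-and-append idiom: drop any stale `control-plane.minikube.internal` entry, append the current IP, and copy the result back over the file. A minimal sketch of that pattern, run against a temporary copy rather than the real `/etc/hosts` (the stale/new IPs are taken from this log):

```shell
set -eu
hosts=$(mktemp)
# Seed a hosts file containing a stale control-plane entry.
printf '127.0.0.1\tlocalhost\n172.31.210.71\tcontrol-plane.minikube.internal\n' > "$hosts"
ip=172.31.215.172
# Same shape as the logged command: filter out the old entry, append the new one.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a temp file and copying it back (rather than redirecting onto the file being read) avoids truncating the hosts file mid-read.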
	I0910 19:56:38.075801    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:38.234804    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:56:38.261215    8968 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100 for IP: 172.31.215.172
	I0910 19:56:38.261289    8968 certs.go:194] generating shared ca certs ...
	I0910 19:56:38.261289    8968 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:38.262036    8968 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 19:56:38.262175    8968 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 19:56:38.262523    8968 certs.go:256] generating profile certs ...
	I0910 19:56:38.263476    8968 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\client.key
	I0910 19:56:38.263704    8968 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.19ae5440
	I0910 19:56:38.263899    8968 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.19ae5440 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.215.172]
	I0910 19:56:38.352070    8968 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.19ae5440 ...
	I0910 19:56:38.353069    8968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.19ae5440: {Name:mka4aa739e1e31d272e3a0c83d71990004ea368f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:38.353318    8968 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.19ae5440 ...
	I0910 19:56:38.353318    8968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.19ae5440: {Name:mk0f0b1f0e62f4cc00cc755cf935f1f4f74aa76a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:38.354397    8968 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.19ae5440 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt
	I0910 19:56:38.368314    8968 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.19ae5440 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key
	I0910 19:56:38.370576    8968 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key
	I0910 19:56:38.370576    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 19:56:38.370912    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 19:56:38.370912    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 19:56:38.370912    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 19:56:38.371455    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 19:56:38.371708    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 19:56:38.371972    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 19:56:38.372703    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 19:56:38.373212    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 19:56:38.373381    8968 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 19:56:38.373740    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 19:56:38.373997    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 19:56:38.374190    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 19:56:38.374190    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 19:56:38.374859    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 19:56:38.374859    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.374859    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 19:56:38.374859    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 19:56:38.376241    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:56:38.421258    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 19:56:38.464846    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:56:38.501694    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 19:56:38.546382    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 19:56:38.586301    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:56:38.625231    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:56:38.665770    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:56:38.704754    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:56:38.743619    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 19:56:38.783193    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 19:56:38.821090    8968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:56:38.856080    8968 ssh_runner.go:195] Run: openssl version
	I0910 19:56:38.865125    8968 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 19:56:38.877220    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:56:38.903309    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.908944    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.909848    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.917705    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.925186    8968 command_runner.go:130] > b5213941
	I0910 19:56:38.934489    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:56:38.957691    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 19:56:38.986436    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 19:56:38.992640    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:56:38.992640    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:56:39.003182    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 19:56:39.011660    8968 command_runner.go:130] > 51391683
	I0910 19:56:39.019826    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 19:56:39.044940    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 19:56:39.074742    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 19:56:39.081680    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:56:39.081680    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:56:39.092804    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 19:56:39.100846    8968 command_runner.go:130] > 3ec20f2e
	I0910 19:56:39.109625    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
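The hash-then-symlink sequence above follows the OpenSSL `c_rehash` convention: the trust directory is scanned by subject hash, so each CA PEM needs a symlink named `<subject-hash>.0`. A self-contained sketch of that convention, using a throwaway self-signed CA in a temp dir instead of `/etc/ssl/certs`:

```shell
set -eu
workdir=$(mktemp -d)
# Generate a throwaway self-signed CA so the example stands alone.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=exampleCA" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.pem" -days 1 2>/dev/null
# Compute the subject hash (the log shows b5213941 for minikubeCA).
hash=$(openssl x509 -hash -noout -in "$workdir/ca.pem")
# Symlink <hash>.0 -> the PEM, as minikube does under /etc/ssl/certs.
ln -fs "$workdir/ca.pem" "$workdir/$hash.0"
ls -l "$workdir/$hash.0"
```

The `.0` suffix disambiguates hash collisions; a second CA with the same subject hash would get `.1`.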
	I0910 19:56:39.136173    8968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:56:39.143744    8968 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:56:39.143744    8968 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0910 19:56:39.143744    8968 command_runner.go:130] > Device: 8,1	Inode: 5242685     Links: 1
	I0910 19:56:39.143744    8968 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 19:56:39.143744    8968 command_runner.go:130] > Access: 2024-09-10 19:35:28.653571928 +0000
	I0910 19:56:39.143744    8968 command_runner.go:130] > Modify: 2024-09-10 19:35:28.653571928 +0000
	I0910 19:56:39.143744    8968 command_runner.go:130] > Change: 2024-09-10 19:35:28.653571928 +0000
	I0910 19:56:39.143863    8968 command_runner.go:130] >  Birth: 2024-09-10 19:35:28.653571928 +0000
	I0910 19:56:39.151948    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:56:39.161535    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.171433    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:56:39.180962    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.190125    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:56:39.200301    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.207469    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:56:39.216732    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.225321    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:56:39.234561    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.242315    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 19:56:39.251876    8968 command_runner.go:130] > Certificate will not expire
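Each "Certificate will not expire" line above comes from `openssl x509 -checkend 86400`, which exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and 1 otherwise. A sketch with a stand-in certificate (the real paths are the node's `/var/lib/minikube/certs/...` files):

```shell
set -eu
dir=$(mktemp -d)
# Stand-in cert valid for 30 days, so the 24h checkend window passes.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=stand-in" \
  -keyout "$dir/k.pem" -out "$dir/c.pem" -days 30 2>/dev/null
if openssl x509 -noout -in "$dir/c.pem" -checkend 86400; then
  echo "Certificate will not expire"
else
  echo "Certificate will expire"
fi
```

A nonzero exit here is what would trigger certificate regeneration rather than reuse.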
	I0910 19:56:39.252160    8968 kubeadm.go:392] StartCluster: {Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:56:39.258172    8968 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 19:56:39.290840    8968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:56:39.312255    8968 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0910 19:56:39.312255    8968 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0910 19:56:39.312255    8968 command_runner.go:130] > /var/lib/minikube/etcd:
	I0910 19:56:39.312255    8968 command_runner.go:130] > member
	I0910 19:56:39.312848    8968 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 19:56:39.312891    8968 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 19:56:39.323278    8968 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 19:56:39.338626    8968 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:56:39.339755    8968 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-629100" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:56:39.340650    8968 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-629100" cluster setting kubeconfig missing "multinode-629100" context setting]
	I0910 19:56:39.341091    8968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:39.356996    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:56:39.357446    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:56:39.358433    8968 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 19:56:39.366456    8968 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:56:39.382610    8968 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0910 19:56:39.382610    8968 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:56:39.382696    8968 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0910 19:56:39.382696    8968 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0910 19:56:39.382696    8968 command_runner.go:130] >  kind: InitConfiguration
	I0910 19:56:39.382696    8968 command_runner.go:130] >  localAPIEndpoint:
	I0910 19:56:39.382696    8968 command_runner.go:130] > -  advertiseAddress: 172.31.210.71
	I0910 19:56:39.382696    8968 command_runner.go:130] > +  advertiseAddress: 172.31.215.172
	I0910 19:56:39.382696    8968 command_runner.go:130] >    bindPort: 8443
	I0910 19:56:39.382696    8968 command_runner.go:130] >  bootstrapTokens:
	I0910 19:56:39.382696    8968 command_runner.go:130] >    - groups:
	I0910 19:56:39.382696    8968 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0910 19:56:39.382769    8968 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0910 19:56:39.382769    8968 command_runner.go:130] >    name: "multinode-629100"
	I0910 19:56:39.382769    8968 command_runner.go:130] >    kubeletExtraArgs:
	I0910 19:56:39.382769    8968 command_runner.go:130] > -    node-ip: 172.31.210.71
	I0910 19:56:39.382769    8968 command_runner.go:130] > +    node-ip: 172.31.215.172
	I0910 19:56:39.382769    8968 command_runner.go:130] >    taints: []
	I0910 19:56:39.382769    8968 command_runner.go:130] >  ---
	I0910 19:56:39.382769    8968 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0910 19:56:39.382769    8968 command_runner.go:130] >  kind: ClusterConfiguration
	I0910 19:56:39.382875    8968 command_runner.go:130] >  apiServer:
	I0910 19:56:39.382875    8968 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.31.210.71"]
	I0910 19:56:39.382875    8968 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.31.215.172"]
	I0910 19:56:39.382875    8968 command_runner.go:130] >    extraArgs:
	I0910 19:56:39.382955    8968 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0910 19:56:39.382955    8968 command_runner.go:130] >  controllerManager:
	I0910 19:56:39.383034    8968 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.31.210.71
	+  advertiseAddress: 172.31.215.172
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-629100"
	   kubeletExtraArgs:
	-    node-ip: 172.31.210.71
	+    node-ip: 172.31.215.172
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.31.210.71"]
	+  certSANs: ["127.0.0.1", "localhost", "172.31.215.172"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
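The drift detection above is just a unified diff: minikube compares the deployed `kubeadm.yaml` against the freshly rendered `kubeadm.yaml.new`, and a nonzero `diff` exit triggers the reconfigure-and-copy path. A minimal illustration with stand-in files in a temp dir (the IPs mirror this log):

```shell
set -eu
d=$(mktemp -d)
# Deployed config still carries the old node IP; the rendered one has the new IP.
printf 'advertiseAddress: 172.31.210.71\n' > "$d/kubeadm.yaml"
printf 'advertiseAddress: 172.31.215.172\n' > "$d/kubeadm.yaml.new"
# diff -u exits 1 on any difference, which is the drift signal.
if ! diff -u "$d/kubeadm.yaml" "$d/kubeadm.yaml.new" > "$d/drift.txt"; then
  echo "config drift detected; copying kubeadm.yaml.new over kubeadm.yaml"
  cp "$d/kubeadm.yaml.new" "$d/kubeadm.yaml"
fi
```

After the copy, a rerun of the same diff exits 0, matching the log's subsequent `sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml`.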
	I0910 19:56:39.383034    8968 kubeadm.go:1160] stopping kube-system containers ...
	I0910 19:56:39.388935    8968 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 19:56:39.417916    8968 command_runner.go:130] > 039fd49f157a
	I0910 19:56:39.417916    8968 command_runner.go:130] > 35f4bfd5434b
	I0910 19:56:39.417916    8968 command_runner.go:130] > 3a4b56ccc379
	I0910 19:56:39.417916    8968 command_runner.go:130] > bf116f91589f
	I0910 19:56:39.417916    8968 command_runner.go:130] > 33f88ed7aee2
	I0910 19:56:39.417916    8968 command_runner.go:130] > 85b03f498671
	I0910 19:56:39.417916    8968 command_runner.go:130] > 1d92603202b0
	I0910 19:56:39.417916    8968 command_runner.go:130] > 4e550827f00f
	I0910 19:56:39.417916    8968 command_runner.go:130] > 76702d5d897e
	I0910 19:56:39.417916    8968 command_runner.go:130] > ea8d0b0af86d
	I0910 19:56:39.417916    8968 command_runner.go:130] > 5cb559fed2d8
	I0910 19:56:39.417916    8968 command_runner.go:130] > ea7220d439d1
	I0910 19:56:39.417916    8968 command_runner.go:130] > d3ab7c79ce4b
	I0910 19:56:39.417916    8968 command_runner.go:130] > db7037ca07a4
	I0910 19:56:39.417916    8968 command_runner.go:130] > 49d9c6949234
	I0910 19:56:39.417916    8968 command_runner.go:130] > f85e1c01f68d
	I0910 19:56:39.423100    8968 docker.go:483] Stopping containers: [039fd49f157a 35f4bfd5434b 3a4b56ccc379 bf116f91589f 33f88ed7aee2 85b03f498671 1d92603202b0 4e550827f00f 76702d5d897e ea8d0b0af86d 5cb559fed2d8 ea7220d439d1 d3ab7c79ce4b db7037ca07a4 49d9c6949234 f85e1c01f68d]
	I0910 19:56:39.429457    8968 ssh_runner.go:195] Run: docker stop 039fd49f157a 35f4bfd5434b 3a4b56ccc379 bf116f91589f 33f88ed7aee2 85b03f498671 1d92603202b0 4e550827f00f 76702d5d897e ea8d0b0af86d 5cb559fed2d8 ea7220d439d1 d3ab7c79ce4b db7037ca07a4 49d9c6949234 f85e1c01f68d
	I0910 19:56:39.452206    8968 command_runner.go:130] > 039fd49f157a
	I0910 19:56:39.452206    8968 command_runner.go:130] > 35f4bfd5434b
	I0910 19:56:39.452206    8968 command_runner.go:130] > 3a4b56ccc379
	I0910 19:56:39.452206    8968 command_runner.go:130] > bf116f91589f
	I0910 19:56:39.452206    8968 command_runner.go:130] > 33f88ed7aee2
	I0910 19:56:39.452206    8968 command_runner.go:130] > 85b03f498671
	I0910 19:56:39.452206    8968 command_runner.go:130] > 1d92603202b0
	I0910 19:56:39.453280    8968 command_runner.go:130] > 4e550827f00f
	I0910 19:56:39.453280    8968 command_runner.go:130] > 76702d5d897e
	I0910 19:56:39.453280    8968 command_runner.go:130] > ea8d0b0af86d
	I0910 19:56:39.453362    8968 command_runner.go:130] > 5cb559fed2d8
	I0910 19:56:39.453382    8968 command_runner.go:130] > ea7220d439d1
	I0910 19:56:39.453382    8968 command_runner.go:130] > d3ab7c79ce4b
	I0910 19:56:39.453382    8968 command_runner.go:130] > db7037ca07a4
	I0910 19:56:39.453382    8968 command_runner.go:130] > 49d9c6949234
	I0910 19:56:39.453382    8968 command_runner.go:130] > f85e1c01f68d
	I0910 19:56:39.466190    8968 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 19:56:39.501405    8968 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:56:39.517879    8968 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0910 19:56:39.517952    8968 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0910 19:56:39.517952    8968 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0910 19:56:39.517952    8968 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:56:39.518698    8968 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:56:39.518780    8968 kubeadm.go:157] found existing configuration files:
	
	I0910 19:56:39.528917    8968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:56:39.545046    8968 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:56:39.545046    8968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:56:39.558329    8968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:56:39.587248    8968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:56:39.604180    8968 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:56:39.604357    8968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:56:39.612388    8968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:56:39.640357    8968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:56:39.656666    8968 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:56:39.656666    8968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:56:39.668563    8968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:56:39.697326    8968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:56:39.713909    8968 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:56:39.714260    8968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:56:39.724205    8968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
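The four grep-then-rm pairs above implement one rule per kubeconfig: keep the file only if it already points at `https://control-plane.minikube.internal:8443`, otherwise remove it so kubeadm regenerates it. A hedged sketch of that loop over a temp dir standing in for `/etc/kubernetes`:

```shell
set -eu
etc=$(mktemp -d)
# One stale kubeconfig pointing at the old raw IP; the other three are absent,
# as in this log's "No such file or directory" branch.
printf 'server: https://172.31.210.71:8443\n' > "$etc/admin.conf"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # -q: quiet, -s: suppress "No such file" noise; nonzero means stale or missing.
  if ! grep -qs 'https://control-plane.minikube.internal:8443' "$etc/$f"; then
    rm -f "$etc/$f"
  fi
done
ls -A "$etc"
```

Removing rather than editing is safe here because `kubeadm init phase kubeconfig` (run later in the restart path) rewrites these files from the certs on disk.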
	I0910 19:56:39.748711    8968 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:56:39.764102    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:39.972487    8968 command_runner.go:130] ! W0910 19:56:40.198008    1589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:39.972818    8968 command_runner.go:130] ! W0910 19:56:40.199067    1589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using the existing "sa" key
	I0910 19:56:39.983160    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:40.040403    8968 command_runner.go:130] ! W0910 19:56:40.266499    1594 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:40.041482    8968 command_runner.go:130] ! W0910 19:56:40.267380    1594 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:56:40.941310    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:41.001988    8968 command_runner.go:130] ! W0910 19:56:41.227831    1599 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.002223    8968 command_runner.go:130] ! W0910 19:56:41.228835    1599 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.216732    8968 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:56:41.216732    8968 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:56:41.216732    8968 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0910 19:56:41.216991    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:41.275754    8968 command_runner.go:130] ! W0910 19:56:41.501519    1627 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.276450    8968 command_runner.go:130] ! W0910 19:56:41.502556    1627 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.288826    8968 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:56:41.288889    8968 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:56:41.288889    8968 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:56:41.288967    8968 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:56:41.289052    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:41.360679    8968 command_runner.go:130] ! W0910 19:56:41.586136    1632 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.361321    8968 command_runner.go:130] ! W0910 19:56:41.587528    1632 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.372319    8968 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:56:41.373305    8968 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:56:41.381296    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:41.886855    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:42.391668    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:42.887719    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:43.396225    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:43.419238    8968 command_runner.go:130] > 1954
	I0910 19:56:43.420241    8968 api_server.go:72] duration metric: took 2.0466435s to wait for apiserver process to appear ...
	I0910 19:56:43.420241    8968 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:56:43.420316    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:46.157624    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:56:46.157624    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:56:46.157624    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:46.230611    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:56:46.230611    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:56:46.433226    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:46.441472    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:56:46.441960    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:56:46.925693    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:46.932716    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:56:46.933511    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:56:47.435691    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:47.444457    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:56:47.444539    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:56:47.929837    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:47.937304    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 200:
	ok
	I0910 19:56:47.937400    8968 round_trippers.go:463] GET https://172.31.215.172:8443/version
	I0910 19:56:47.937400    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:47.937400    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:47.937400    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:47.946102    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:56:47.946102    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Audit-Id: 713a18e9-dfc4-4639-b141-13fdcfdd6f42
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:47.946102    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:47.946102    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Content-Length: 263
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:48 GMT
	I0910 19:56:47.946102    8968 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0910 19:56:47.947107    8968 api_server.go:141] control plane version: v1.31.0
	I0910 19:56:47.947107    8968 api_server.go:131] duration metric: took 4.5265683s to wait for apiserver health ...
	I0910 19:56:47.947107    8968 cni.go:84] Creating CNI manager for ""
	I0910 19:56:47.947107    8968 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 19:56:47.949109    8968 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0910 19:56:47.960086    8968 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0910 19:56:47.968545    8968 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0910 19:56:47.968619    8968 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0910 19:56:47.968619    8968 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0910 19:56:47.968619    8968 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 19:56:47.968619    8968 command_runner.go:130] > Access: 2024-09-10 19:55:27.060111908 +0000
	I0910 19:56:47.968679    8968 command_runner.go:130] > Modify: 2024-09-10 02:48:06.000000000 +0000
	I0910 19:56:47.968704    8968 command_runner.go:130] > Change: 2024-09-10 19:55:15.625000000 +0000
	I0910 19:56:47.968704    8968 command_runner.go:130] >  Birth: -
	I0910 19:56:47.969691    8968 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0910 19:56:47.969748    8968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0910 19:56:48.004245    8968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0910 19:56:48.828867    8968 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0910 19:56:48.828961    8968 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0910 19:56:48.828961    8968 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0910 19:56:48.828961    8968 command_runner.go:130] > daemonset.apps/kindnet configured
	I0910 19:56:48.828961    8968 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:56:48.828961    8968 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 19:56:48.828961    8968 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 19:56:48.828961    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:56:48.829485    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:48.829485    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:48.829553    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:48.834199    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:56:48.834238    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:48.834238    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:48.834238    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:48.834238    8968 round_trippers.go:580]     Audit-Id: 5bde7628-1930-4ae6-8932-b38e67eee1fd
	I0910 19:56:48.834238    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:48.834238    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:48.834238    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:48.836465    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1665"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90770 chars]
	I0910 19:56:48.844126    8968 system_pods.go:59] 12 kube-system pods found
	I0910 19:56:48.844126    8968 system_pods.go:61] "coredns-6f6b679f8f-srtv8" [76dd899a-75f4-497d-a6a9-6b263f3a379d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:56:48.844126    8968 system_pods.go:61] "etcd-multinode-629100" [2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:56:48.844126    8968 system_pods.go:61] "kindnet-5crht" [d569a3a6-5b06-4adf-9ac0-294274923906] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kindnet-6tdpv" [2c45f0f2-5d24-4ec2-8e6b-06923ea85e78] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kindnet-lj2v2" [6643175a-32c5-4461-a441-08b9ac9ba98a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-apiserver-multinode-629100" [5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-controller-manager-multinode-629100" [60adcb0c-808a-477c-9432-83dc8f96d6c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-proxy-4tzx6" [9bb18c28-3ee9-4028-a61d-3d7f6ea31894] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-proxy-qqrrg" [1fc7fdda-d5e4-4c72-96c1-2348eb72b491] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-proxy-wqf2d" [27e7846e-e506-48e0-96b9-351b3ca91703] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-scheduler-multinode-629100" [fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:56:48.844126    8968 system_pods.go:61] "storage-provisioner" [511e6e52-fc2a-4562-9903-42ee1f2e0a2d] Running
	I0910 19:56:48.844126    8968 system_pods.go:74] duration metric: took 15.1645ms to wait for pod list to return data ...
	I0910 19:56:48.844126    8968 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:56:48.844126    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes
	I0910 19:56:48.844126    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:48.844126    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:48.844126    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:48.850348    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:48.850348    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:48.850348    8968 round_trippers.go:580]     Audit-Id: 5681b103-bfc7-4d17-b96c-df41bc8d3fc4
	I0910 19:56:48.850348    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:48.850348    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:48.850348    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:48.850779    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:48.850779    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:48.851108    8968 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1665"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15609 chars]
	I0910 19:56:48.852347    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:56:48.852416    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:56:48.852416    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:56:48.852416    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:56:48.852416    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:56:48.852416    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:56:48.852416    8968 node_conditions.go:105] duration metric: took 8.2895ms to run NodePressure ...
	I0910 19:56:48.852486    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:48.912312    8968 command_runner.go:130] ! W0910 19:56:49.138787    2436 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:48.913066    8968 command_runner.go:130] ! W0910 19:56:49.139755    2436 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:49.159277    8968 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0910 19:56:49.159277    8968 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0910 19:56:49.159277    8968 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 19:56:49.159660    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0910 19:56:49.159660    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.159660    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.159660    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.168638    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:56:49.168638    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.169647    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.169647    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.169647    8968 round_trippers.go:580]     Audit-Id: b4ed1f29-dcfe-485b-b5f4-0129d38348cf
	I0910 19:56:49.169647    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.169647    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.169647    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.169647    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1667"},"items":[{"metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1661","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31351 chars]
	I0910 19:56:49.170644    8968 kubeadm.go:739] kubelet initialised
	I0910 19:56:49.170644    8968 kubeadm.go:740] duration metric: took 11.3659ms waiting for restarted kubelet to initialise ...
	I0910 19:56:49.170644    8968 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:56:49.170644    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:56:49.170644    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.170644    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.170644    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.184625    8968 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0910 19:56:49.184625    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.184625    8968 round_trippers.go:580]     Audit-Id: 60048603-659d-46aa-afad-549ffea23669
	I0910 19:56:49.184625    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.184625    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.184986    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.184986    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.184986    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.186337    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1668"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90582 chars]
	I0910 19:56:49.190549    8968 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.190610    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:56:49.190610    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.190610    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.190610    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.196148    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:56:49.196148    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.196148    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.196148    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.196148    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.196148    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.196148    8968 round_trippers.go:580]     Audit-Id: 30588b13-3bb3-4aaf-ba35-0a4ab5b59092
	I0910 19:56:49.196148    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.197133    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:56:49.197133    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:49.197133    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.197133    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.197133    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.200742    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:49.201078    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.201078    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.201078    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.201078    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.201078    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.201078    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.201078    8968 round_trippers.go:580]     Audit-Id: 1aabc449-22db-460e-ac73-1f108c2b2f96
	I0910 19:56:49.201324    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0910 19:56:49.201324    8968 pod_ready.go:98] node "multinode-629100" hosting pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.201324    8968 pod_ready.go:82] duration metric: took 10.7127ms for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.201324    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.201324    8968 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.201860    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 19:56:49.201896    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.201896    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.201896    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.203478    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:56:49.203478    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.203478    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.203478    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.203478    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.204469    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.204469    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.204469    8968 round_trippers.go:580]     Audit-Id: 070618bf-0694-4d32-a725-f9b938f84315
	I0910 19:56:49.204658    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1661","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6841 chars]
	I0910 19:56:49.205086    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:49.205147    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.205147    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.205147    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.206601    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:56:49.206601    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.206601    8968 round_trippers.go:580]     Audit-Id: 9f48826b-8f2b-4e54-8568-f3c35d420bcd
	I0910 19:56:49.206601    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.206601    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.206601    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.206601    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.206601    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.206601    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0910 19:56:49.207608    8968 pod_ready.go:98] node "multinode-629100" hosting pod "etcd-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.207608    8968 pod_ready.go:82] duration metric: took 6.2843ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.207608    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "etcd-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.207608    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.207608    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 19:56:49.207608    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.207608    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.207608    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.209602    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:56:49.209602    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.209602    8968 round_trippers.go:580]     Audit-Id: aec56f80-9d90-49e5-bed0-0f85ee99fb57
	I0910 19:56:49.209602    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.209602    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.209602    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.209602    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.209602    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.210587    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5","resourceVersion":"1660","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.215.172:8443","kubernetes.io/config.hash":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.mirror":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.seen":"2024-09-10T19:56:41.591483300Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8293 chars]
	I0910 19:56:49.210587    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:49.210587    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.210587    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.210587    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.212627    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.212627    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.212627    8968 round_trippers.go:580]     Audit-Id: 97ced73a-fd63-47bb-bb91-c7b109fb85bc
	I0910 19:56:49.212627    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.212627    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.212627    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.212627    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.212627    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.213585    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0910 19:56:49.213585    8968 pod_ready.go:98] node "multinode-629100" hosting pod "kube-apiserver-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.213585    8968 pod_ready.go:82] duration metric: took 5.9763ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.213585    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "kube-apiserver-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.213585    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.213585    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 19:56:49.213585    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.213585    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.213585    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.215614    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.215614    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.215614    8968 round_trippers.go:580]     Audit-Id: e1ce9c3a-875a-4aa7-aa58-4e1fe654173f
	I0910 19:56:49.215614    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.215614    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.215614    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.215614    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.215614    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.216610    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"1648","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7735 chars]
	I0910 19:56:49.229665    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:49.229944    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.229944    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.229944    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.232597    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.232597    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.232597    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.232597    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.232597    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.233123    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.233123    8968 round_trippers.go:580]     Audit-Id: 7d6969ee-c088-41fc-8d43-157108eae7d8
	I0910 19:56:49.233123    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.233316    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0910 19:56:49.233947    8968 pod_ready.go:98] node "multinode-629100" hosting pod "kube-controller-manager-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.234020    8968 pod_ready.go:82] duration metric: took 20.434ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.234020    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "kube-controller-manager-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.234020    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.435280    8968 request.go:632] Waited for 200.908ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:56:49.435280    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:56:49.435552    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.435552    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.435552    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.438149    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.438149    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.438149    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.438149    8968 round_trippers.go:580]     Audit-Id: 88339815-590c-4b5a-91e4-0a88ab399b0d
	I0910 19:56:49.438149    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.438149    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.438149    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.438149    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.439515    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4tzx6","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bb18c28-3ee9-4028-a61d-3d7f6ea31894","resourceVersion":"1613","creationTimestamp":"2024-09-10T19:42:55Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0910 19:56:49.640956    8968 request.go:632] Waited for 200.5808ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:56:49.641544    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:56:49.641544    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.641652    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.641652    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.645006    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:49.645006    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.645006    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.645006    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.645006    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.645006    8968 round_trippers.go:580]     Audit-Id: 610818d5-79b1-4fab-93a9-69606dada142
	I0910 19:56:49.645006    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.645006    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.645619    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"39f4665c-e276-4967-a647-46b3a9a1c67a","resourceVersion":"1621","creationTimestamp":"2024-09-10T19:52:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_52_30_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4394 chars]
	I0910 19:56:49.645910    8968 pod_ready.go:98] node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:56:49.645910    8968 pod_ready.go:82] duration metric: took 411.8629ms for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.645910    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:56:49.646446    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.829425    8968 request.go:632] Waited for 182.6554ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:56:49.829921    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:56:49.830132    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.830132    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.830223    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.833102    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.833102    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.833102    8968 round_trippers.go:580]     Audit-Id: f90a6a81-f7ba-4bae-95fa-61a9e378eed4
	I0910 19:56:49.833102    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.833504    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.833504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.833572    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.833572    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:49.833919    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qqrrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fc7fdda-d5e4-4c72-96c1-2348eb72b491","resourceVersion":"580","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0910 19:56:50.033123    8968 request.go:632] Waited for 199.0453ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:56:50.033123    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:56:50.033369    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.033369    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.033369    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.038790    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:56:50.039102    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.039138    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.039138    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.039138    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.039138    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:50.039138    8968 round_trippers.go:580]     Audit-Id: f2939d62-aab8-4279-8dc7-4f7fa6f5986e
	I0910 19:56:50.039138    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.039331    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"1311","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3819 chars]
	I0910 19:56:50.039521    8968 pod_ready.go:93] pod "kube-proxy-qqrrg" in "kube-system" namespace has status "Ready":"True"
	I0910 19:56:50.039521    8968 pod_ready.go:82] duration metric: took 393.0491ms for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:50.039521    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:50.237594    8968 request.go:632] Waited for 197.8807ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:56:50.237877    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:56:50.237990    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.238051    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.238051    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.240645    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:50.241083    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.241083    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.241083    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.241083    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.241083    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.241083    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:50.241083    8968 round_trippers.go:580]     Audit-Id: 0eaa324e-f4cc-4973-84b9-4f56008eb347
	I0910 19:56:50.241580    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"1662","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0910 19:56:50.441118    8968 request.go:632] Waited for 198.7028ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:50.441118    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:50.441118    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.441571    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.441571    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.444669    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:50.445101    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.445101    8968 round_trippers.go:580]     Audit-Id: 36628f11-df31-4ec0-a36b-20c8fc25aa32
	I0910 19:56:50.445238    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.445238    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.445238    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.445238    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.445238    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:50.445486    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:50.446015    8968 pod_ready.go:98] node "multinode-629100" hosting pod "kube-proxy-wqf2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:50.446237    8968 pod_ready.go:82] duration metric: took 406.6894ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:50.446237    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "kube-proxy-wqf2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:50.446237    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:50.644397    8968 request.go:632] Waited for 197.9545ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:56:50.644705    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:56:50.644705    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.644705    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.644705    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.648594    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:50.648594    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.648594    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.648594    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.648594    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:50.648594    8968 round_trippers.go:580]     Audit-Id: d75c8e27-365c-4745-accc-36cdaff81962
	I0910 19:56:50.648594    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.648594    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.649010    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"1651","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0910 19:56:50.831389    8968 request.go:632] Waited for 181.395ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:50.831671    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:50.831671    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.831671    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.831734    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.834450    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:50.835140    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.835140    8968 round_trippers.go:580]     Audit-Id: a9120c8b-57d6-4483-9cad-bc9469c62666
	I0910 19:56:50.835140    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.835140    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.835140    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.835140    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.835140    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:51 GMT
	I0910 19:56:50.835332    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:50.835403    8968 pod_ready.go:98] node "multinode-629100" hosting pod "kube-scheduler-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:50.835403    8968 pod_ready.go:82] duration metric: took 389.1401ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:50.835403    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "kube-scheduler-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:50.835403    8968 pod_ready.go:39] duration metric: took 1.6646493s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:56:50.835403    8968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:56:50.852477    8968 command_runner.go:130] > -16
	I0910 19:56:50.852626    8968 ops.go:34] apiserver oom_adj: -16
	I0910 19:56:50.852626    8968 kubeadm.go:597] duration metric: took 11.5389757s to restartPrimaryControlPlane
	I0910 19:56:50.852626    8968 kubeadm.go:394] duration metric: took 11.5997803s to StartCluster
	I0910 19:56:50.852626    8968 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:50.852626    8968 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:56:50.854513    8968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:50.856195    8968 start.go:235] Will wait 6m0s for node &{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 19:56:50.856195    8968 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:56:50.856907    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:56:50.862520    8968 out.go:177] * Verifying Kubernetes components...
	I0910 19:56:50.869517    8968 out.go:177] * Enabled addons: 
	I0910 19:56:50.874529    8968 addons.go:510] duration metric: took 18.332ms for enable addons: enabled=[]
	I0910 19:56:50.880774    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:51.108147    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:56:51.130588    8968 node_ready.go:35] waiting up to 6m0s for node "multinode-629100" to be "Ready" ...
	I0910 19:56:51.130758    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:51.130758    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:51.130758    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:51.130758    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:51.137143    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:51.137143    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:51.137143    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:51.137143    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:51.137143    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:51 GMT
	I0910 19:56:51.137143    8968 round_trippers.go:580]     Audit-Id: 5ba524b0-31fe-472b-a3ef-ac53951fa9f0
	I0910 19:56:51.137143    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:51.137143    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:51.137791    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:51.631851    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:51.631940    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:51.631940    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:51.631940    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:51.638814    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:51.638814    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:51.638814    8968 round_trippers.go:580]     Audit-Id: 0a9f5919-461f-4f17-857e-bb49bd84b23f
	I0910 19:56:51.638814    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:51.638814    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:51.638814    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:51.638814    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:51.638814    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:51 GMT
	I0910 19:56:51.638814    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:52.133153    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:52.133153    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:52.133153    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:52.133153    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:52.137001    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:52.137060    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:52.137060    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:52 GMT
	I0910 19:56:52.137060    8968 round_trippers.go:580]     Audit-Id: b4823266-9636-4448-b6da-23dc33e10017
	I0910 19:56:52.137060    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:52.137060    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:52.137121    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:52.137121    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:52.137177    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:52.631980    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:52.632043    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:52.632101    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:52.632101    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:52.638857    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:52.638857    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:52.638857    8968 round_trippers.go:580]     Audit-Id: 37b1bdda-30aa-4b7d-bf4c-4c03a0cab9b5
	I0910 19:56:52.638857    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:52.638857    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:52.638857    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:52.638857    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:52.638857    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:52 GMT
	I0910 19:56:52.638857    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:53.134125    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:53.134188    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:53.134188    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:53.134188    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:53.137674    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:53.137743    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:53.137743    8968 round_trippers.go:580]     Audit-Id: b09ba073-af41-4339-84d4-adc646a8a339
	I0910 19:56:53.137743    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:53.137743    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:53.137743    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:53.137743    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:53.137743    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:53 GMT
	I0910 19:56:53.138203    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:53.138874    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
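	The repeated GETs above are minikube's node-readiness poll: it fetches the Node object roughly every 500 ms and reports `has status "Ready":"False"` until the kubelet posts a Ready condition. The following is an illustrative Python sketch of that check only (minikube itself is Go; the function name `node_is_ready` and the sample dict are invented for illustration). It derives the same boolean from a Node object's `status.conditions`, shaped like the truncated response bodies in this log:

```python
# Illustrative sketch, not minikube's actual code: how the logged
# "Ready":"False" verdict follows from a Node object's conditions.
def node_is_ready(node: dict) -> bool:
    """Return True iff the Node has a Ready condition with status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # no Ready condition posted yet -> not ready

# Minimal Node shaped like the (truncated) bodies above:
node = {
    "kind": "Node",
    "metadata": {"name": "multinode-629100"},
    "status": {"conditions": [{"type": "Ready", "status": "False"}]},
}
print(node_is_ready(node))
```

	The poll loop simply re-fetches and re-evaluates this predicate until it flips to true or the wait times out.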
	I0910 19:56:53.641174    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:53.641174    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:53.641174    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:53.641174    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:53.647292    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:53.647292    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:53.647292    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:53.647292    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:53.647292    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:53.647292    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:53.647292    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:53 GMT
	I0910 19:56:53.647292    8968 round_trippers.go:580]     Audit-Id: 1b4a2e1f-d521-4bd9-a523-27ee209ada10
	I0910 19:56:53.647858    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:54.139744    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:54.139744    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:54.139744    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:54.139744    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:54.143882    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:56:54.143969    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:54.144029    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:54.144029    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:54.144029    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:54.144029    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:54.144083    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:54 GMT
	I0910 19:56:54.144083    8968 round_trippers.go:580]     Audit-Id: 379a870c-4731-488d-80ea-d974746bc082
	I0910 19:56:54.144462    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:54.638543    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:54.638543    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:54.638543    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:54.638543    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:54.642468    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:54.642468    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:54.642468    8968 round_trippers.go:580]     Audit-Id: 66a50367-084b-4ed6-8acb-b459d3ace642
	I0910 19:56:54.642468    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:54.642468    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:54.642468    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:54.642468    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:54.642468    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:54 GMT
	I0910 19:56:54.642468    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:55.141354    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:55.141443    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:55.141443    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:55.141535    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:55.144594    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:55.144676    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:55.144676    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:55.144676    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:55 GMT
	I0910 19:56:55.144739    8968 round_trippers.go:580]     Audit-Id: 9df6f54f-d6b0-48bc-9edf-0006fea95abc
	I0910 19:56:55.144739    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:55.144739    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:55.144739    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:55.145047    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:55.145625    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:56:55.637989    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:55.638050    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:55.638050    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:55.638050    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:55.643593    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:56:55.643593    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:55.643593    8968 round_trippers.go:580]     Audit-Id: eab64fe0-d630-451a-bf35-042dfc92114f
	I0910 19:56:55.643593    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:55.643593    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:55.643593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:55.643593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:55.643593    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:55 GMT
	I0910 19:56:55.644283    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:56.140105    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:56.140175    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:56.140175    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:56.140175    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:56.147767    8968 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:56:56.147836    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:56.147836    8968 round_trippers.go:580]     Audit-Id: b95ae3ab-2671-46e3-9bdb-e3e412cf819d
	I0910 19:56:56.147836    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:56.147836    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:56.147836    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:56.147836    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:56.147836    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:56 GMT
	I0910 19:56:56.147836    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:56.635950    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:56.636022    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:56.636022    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:56.636022    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:56.639115    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:56.639115    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:56.639115    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:56.639115    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:56.639115    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:56.639115    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:56 GMT
	I0910 19:56:56.639115    8968 round_trippers.go:580]     Audit-Id: a3b0ba40-1882-4975-a54b-d8e5dd574b26
	I0910 19:56:56.639115    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:56.639509    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:57.136806    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:57.136873    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:57.136873    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:57.136873    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:57.143717    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:57.143717    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:57.143717    8968 round_trippers.go:580]     Audit-Id: 5ec287db-d568-41e8-857c-673761bafc15
	I0910 19:56:57.143717    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:57.143717    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:57.143717    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:57.143717    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:57.143717    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:57 GMT
	I0910 19:56:57.144539    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:57.633592    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:57.633913    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:57.633913    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:57.633913    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:57.640009    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:57.640009    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:57.640009    8968 round_trippers.go:580]     Audit-Id: 4ec62175-9d1c-4b54-af84-280804b1cf3a
	I0910 19:56:57.640009    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:57.640009    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:57.640009    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:57.640009    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:57.640009    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:57 GMT
	I0910 19:56:57.640009    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:57.641359    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:56:58.135035    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:58.135107    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:58.135107    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:58.135107    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:58.139609    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:56:58.139676    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:58.139740    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:58.139740    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:58.139740    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:58 GMT
	I0910 19:56:58.139740    8968 round_trippers.go:580]     Audit-Id: 503ec55e-b23f-48e3-ad14-157b56843cf9
	I0910 19:56:58.139740    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:58.139740    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:58.140208    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:58.633187    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:58.633276    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:58.633352    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:58.633352    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:58.636949    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:58.637085    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:58.637085    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:58.637085    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:58.637191    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:58 GMT
	I0910 19:56:58.637191    8968 round_trippers.go:580]     Audit-Id: 7d791441-817d-4ce0-be03-32b2ccefe611
	I0910 19:56:58.637191    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:58.637191    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:58.637443    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:59.134544    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:59.134620    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:59.134692    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:59.134692    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:59.138350    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:59.138425    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:59.138425    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:59.138425    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:59 GMT
	I0910 19:56:59.138425    8968 round_trippers.go:580]     Audit-Id: 1dc257f7-72c5-47b9-b154-63792293e7a5
	I0910 19:56:59.138493    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:59.138493    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:59.138493    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:59.139096    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:59.633419    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:59.633519    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:59.633607    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:59.633607    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:59.641920    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:56:59.641920    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:59.641920    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:59.641920    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:59 GMT
	I0910 19:56:59.641920    8968 round_trippers.go:580]     Audit-Id: f59364d6-90db-4736-bd70-08c28e04e7c3
	I0910 19:56:59.641920    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:59.641920    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:59.641920    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:59.641920    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:59.643175    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:57:00.135059    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:00.135059    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:00.135059    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:00.135059    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:00.140067    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:00.140067    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:00.140067    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:00.140067    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:00.140067    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:00 GMT
	I0910 19:57:00.140067    8968 round_trippers.go:580]     Audit-Id: 9c6a2367-c266-476a-9773-2ac19254fe4a
	I0910 19:57:00.141072    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:00.141072    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:00.141072    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:00.634563    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:00.634563    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:00.634563    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:00.634563    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:00.637175    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:00.637175    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:00.637175    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:00.637175    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:00.637175    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:00.637175    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:00.638195    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:00 GMT
	I0910 19:57:00.638195    8968 round_trippers.go:580]     Audit-Id: 847abe16-33be-4e40-be91-2962afa4411d
	I0910 19:57:00.638288    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:01.133641    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:01.133712    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:01.133784    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:01.133784    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:01.137461    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:01.137461    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:01.137461    8968 round_trippers.go:580]     Audit-Id: f1f0bf41-87a3-4706-b193-b4b89590a6e0
	I0910 19:57:01.137461    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:01.138001    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:01.138001    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:01.138001    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:01.138001    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:01 GMT
	I0910 19:57:01.138450    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:01.634031    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:01.634031    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:01.634031    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:01.634031    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:01.638655    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:01.638655    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:01.638655    8968 round_trippers.go:580]     Audit-Id: 0d6bc5ef-8af3-4790-8c90-b7e06dc933d0
	I0910 19:57:01.638655    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:01.638655    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:01.638655    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:01.638655    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:01.638655    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:01 GMT
	I0910 19:57:01.638981    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:02.132646    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:02.133079    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.133079    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.133202    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.136368    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:02.136368    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.136368    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.136982    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.136982    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.136982    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.136982    8968 round_trippers.go:580]     Audit-Id: 89dfd954-7388-4011-940e-7f0cf59cc252
	I0910 19:57:02.136982    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.137291    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:02.137770    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:57:02.635339    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:02.635339    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.635339    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.635339    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.637923    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:02.637923    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.637923    8968 round_trippers.go:580]     Audit-Id: 5c856fa1-2dc2-4213-bdd2-5bf738880af1
	I0910 19:57:02.637923    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.637923    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.637923    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.638334    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.638334    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.638451    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:02.639518    8968 node_ready.go:49] node "multinode-629100" has status "Ready":"True"
	I0910 19:57:02.639572    8968 node_ready.go:38] duration metric: took 11.5081312s for node "multinode-629100" to be "Ready" ...
	I0910 19:57:02.639572    8968 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:57:02.639795    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:02.639861    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.639923    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.639923    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.644089    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:02.644648    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.644648    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.644717    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.644717    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.644787    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.644787    8968 round_trippers.go:580]     Audit-Id: 595c607e-99a5-43d1-8b95-fec833029bd3
	I0910 19:57:02.644787    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.646590    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1776"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89610 chars]
	I0910 19:57:02.650933    8968 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:02.650933    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:02.650933    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.650933    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.650933    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.657435    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:02.657435    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.657435    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.657435    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.657435    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.657435    8968 round_trippers.go:580]     Audit-Id: 36ce0d26-1a3e-4d92-805b-902e35b3b2ba
	I0910 19:57:02.657435    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.657435    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.657774    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:02.658296    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:02.658366    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.658366    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.658366    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.659933    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:57:02.659933    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.659933    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.659933    8968 round_trippers.go:580]     Audit-Id: 5f259b94-e7b3-44a4-a9e8-aedef89105c0
	I0910 19:57:02.659933    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.659933    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.659933    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.659933    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.660917    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:03.152919    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:03.152990    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:03.152990    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:03.152990    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:03.159846    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:03.159846    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:03.159846    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:03.159846    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:03.159846    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:03.159846    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:03 GMT
	I0910 19:57:03.159846    8968 round_trippers.go:580]     Audit-Id: 0f7b3891-6c5c-4eeb-90cb-1182fa12c93b
	I0910 19:57:03.159846    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:03.159846    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:03.161990    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:03.162095    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:03.162145    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:03.162145    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:03.165259    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:03.165259    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:03.165259    8968 round_trippers.go:580]     Audit-Id: 40049601-b492-442c-84e3-5bff85588b73
	I0910 19:57:03.165259    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:03.165259    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:03.165259    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:03.165259    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:03.165259    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:03 GMT
	I0910 19:57:03.165259    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:03.653111    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:03.653175    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:03.653239    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:03.653239    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:03.657009    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:03.657009    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:03.657009    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:03.657009    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:03 GMT
	I0910 19:57:03.657009    8968 round_trippers.go:580]     Audit-Id: e56df73a-0297-439c-9d28-3145bb5b3e23
	I0910 19:57:03.657009    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:03.657382    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:03.657382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:03.657612    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:03.658496    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:03.658496    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:03.658496    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:03.658496    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:03.665630    8968 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:57:03.666162    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:03.666162    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:03.666162    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:03.666162    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:03 GMT
	I0910 19:57:03.666202    8968 round_trippers.go:580]     Audit-Id: 94443d94-b56e-46c7-a5f2-670871731336
	I0910 19:57:03.666202    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:03.666202    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:03.667688    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:04.166185    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:04.166185    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:04.166185    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:04.166295    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:04.170560    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:04.170560    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:04.170676    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:04.170676    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:04.170676    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:04.170676    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:04 GMT
	I0910 19:57:04.170676    8968 round_trippers.go:580]     Audit-Id: b0199659-fd08-4dd9-9a7b-be9e0dc2a8bd
	I0910 19:57:04.170676    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:04.170962    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:04.171926    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:04.171926    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:04.171926    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:04.172047    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:04.174348    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:04.174348    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:04.175041    8968 round_trippers.go:580]     Audit-Id: 37a63722-2733-43a2-ab61-3cc692c0f2dd
	I0910 19:57:04.175041    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:04.175041    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:04.175041    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:04.175041    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:04.175041    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:04 GMT
	I0910 19:57:04.175253    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:04.665724    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:04.665724    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:04.665724    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:04.665724    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:04.669283    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:04.670301    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:04.670367    8968 round_trippers.go:580]     Audit-Id: c4f8a826-aa15-4c44-a37e-bea5d938009b
	I0910 19:57:04.670367    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:04.670367    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:04.670367    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:04.670428    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:04.670428    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:04 GMT
	I0910 19:57:04.670990    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:04.671976    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:04.672075    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:04.672075    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:04.672075    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:04.676127    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:04.676127    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:04.676127    8968 round_trippers.go:580]     Audit-Id: de74ce9d-6a94-4f49-a844-4308649e54b5
	I0910 19:57:04.676127    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:04.676127    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:04.676127    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:04.676127    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:04.676127    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:04 GMT
	I0910 19:57:04.677076    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:04.677076    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:05.154213    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:05.154213    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:05.154213    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:05.154213    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:05.158116    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:05.158116    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:05.158116    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:05.158116    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:05.158116    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:05.158116    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:05 GMT
	I0910 19:57:05.158116    8968 round_trippers.go:580]     Audit-Id: f72016e0-7ad5-4d4e-9980-01c9b3ee9b13
	I0910 19:57:05.158116    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:05.158116    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:05.159582    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:05.159582    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:05.159704    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:05.159704    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:05.162059    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:05.162059    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:05.162059    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:05.162059    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:05.162059    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:05.162059    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:05.162059    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:05 GMT
	I0910 19:57:05.162059    8968 round_trippers.go:580]     Audit-Id: 34c5d5df-9594-4789-8634-d5e961ac34d2
	I0910 19:57:05.163330    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:05.655895    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:05.655895    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:05.655895    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:05.655895    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:05.659453    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:05.659708    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:05.659708    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:05 GMT
	I0910 19:57:05.659708    8968 round_trippers.go:580]     Audit-Id: d879c180-0592-49b7-b07e-4ad2562050d2
	I0910 19:57:05.659708    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:05.659708    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:05.659708    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:05.659708    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:05.659870    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:05.661144    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:05.661219    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:05.661219    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:05.661219    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:05.663516    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:05.663516    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:05.663516    8968 round_trippers.go:580]     Audit-Id: e7ab7840-5f70-4f14-9630-6a052ba7cfd0
	I0910 19:57:05.664175    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:05.664175    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:05.664175    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:05.664175    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:05.664175    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:05 GMT
	I0910 19:57:05.664471    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:06.162728    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:06.162795    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:06.162795    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:06.162849    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:06.166960    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:06.167022    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:06.167022    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:06.167022    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:06.167101    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:06.167101    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:06.167101    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:06 GMT
	I0910 19:57:06.167101    8968 round_trippers.go:580]     Audit-Id: 074c168c-e064-4a57-a1ba-99babc6bd23d
	I0910 19:57:06.167517    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:06.168629    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:06.168629    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:06.168714    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:06.168714    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:06.172868    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:06.172868    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:06.172868    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:06.172868    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:06.172868    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:06.172868    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:06 GMT
	I0910 19:57:06.172868    8968 round_trippers.go:580]     Audit-Id: a544011e-f90a-4764-8591-097da25fc39a
	I0910 19:57:06.172868    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:06.172868    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:06.660918    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:06.660918    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:06.660918    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:06.660918    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:06.665601    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:06.665601    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:06.665601    8968 round_trippers.go:580]     Audit-Id: 87a03276-235e-4123-8929-7f3dc8cc5edf
	I0910 19:57:06.665601    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:06.665601    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:06.665601    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:06.665601    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:06.665601    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:06 GMT
	I0910 19:57:06.665601    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:06.667104    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:06.667104    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:06.667162    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:06.667162    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:06.669723    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:06.670634    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:06.670634    8968 round_trippers.go:580]     Audit-Id: e8ddae4c-0866-4407-a182-835791a5a2b8
	I0910 19:57:06.670634    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:06.670663    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:06.670663    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:06.670663    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:06.670663    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:06 GMT
	I0910 19:57:06.670663    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:07.161575    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:07.161671    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:07.161671    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:07.161747    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:07.167076    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:07.167076    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:07.167076    8968 round_trippers.go:580]     Audit-Id: c4b102f2-53c0-43c1-b4bb-baa59cb464da
	I0910 19:57:07.167076    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:07.167076    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:07.167076    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:07.167076    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:07.167076    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:07 GMT
	I0910 19:57:07.167325    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:07.167965    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:07.167965    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:07.167965    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:07.167965    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:07.170194    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:07.170435    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:07.170435    8968 round_trippers.go:580]     Audit-Id: 45c9cda7-5749-460c-aabf-d8a40002e471
	I0910 19:57:07.170435    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:07.170435    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:07.170435    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:07.170435    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:07.170435    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:07 GMT
	I0910 19:57:07.171081    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:07.171788    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:07.663731    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:07.663795    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:07.663943    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:07.663943    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:07.667382    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:07.667382    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:07.667382    8968 round_trippers.go:580]     Audit-Id: 6fc423df-0aa7-4b9c-bf28-4634cc56a4d3
	I0910 19:57:07.667382    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:07.667382    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:07.667382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:07.667382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:07.667382    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:07 GMT
	I0910 19:57:07.668421    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:07.669076    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:07.669165    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:07.669165    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:07.669165    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:07.672311    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:07.672311    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:07.672311    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:07 GMT
	I0910 19:57:07.672311    8968 round_trippers.go:580]     Audit-Id: 56018a0f-7b0c-4fa9-a0a2-4d1ac2e7798b
	I0910 19:57:07.672311    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:07.672311    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:07.672311    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:07.672311    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:07.673391    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:08.162015    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:08.162015    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:08.162015    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:08.162134    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:08.167851    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:08.167851    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:08.167851    8968 round_trippers.go:580]     Audit-Id: df7f25ab-7dc2-4e72-bde5-691f76002fdc
	I0910 19:57:08.167851    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:08.167851    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:08.167851    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:08.167851    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:08.167851    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:08 GMT
	I0910 19:57:08.168481    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:08.169338    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:08.169338    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:08.169338    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:08.169338    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:08.172493    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:08.172493    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:08.172493    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:08 GMT
	I0910 19:57:08.172493    8968 round_trippers.go:580]     Audit-Id: 1ccd31f0-9804-495a-97db-c3f27e17340e
	I0910 19:57:08.172493    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:08.172493    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:08.172493    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:08.172493    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:08.173115    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:08.665445    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:08.665445    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:08.665445    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:08.665555    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:08.674321    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:57:08.674321    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:08.674321    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:08.674321    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:08.674321    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:08.674321    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:08 GMT
	I0910 19:57:08.674321    8968 round_trippers.go:580]     Audit-Id: e5c3d861-5957-413b-b65e-e7692573c8f4
	I0910 19:57:08.674321    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:08.674321    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:08.675624    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:08.675715    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:08.675715    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:08.675715    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:08.678633    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:08.678696    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:08.678696    8968 round_trippers.go:580]     Audit-Id: 92cc3c85-c723-42a2-b974-b3bdde4ef8f6
	I0910 19:57:08.678696    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:08.678696    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:08.678696    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:08.678696    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:08.678696    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:08 GMT
	I0910 19:57:08.678696    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:09.166401    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:09.166470    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:09.166470    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:09.166470    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:09.173896    8968 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:57:09.173896    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:09.173896    8968 round_trippers.go:580]     Audit-Id: 33c30ae4-c3e7-4e32-abff-338ce8d4e9fe
	I0910 19:57:09.173896    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:09.173896    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:09.173896    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:09.173896    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:09.173896    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:09 GMT
	I0910 19:57:09.173896    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:09.174911    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:09.174911    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:09.174911    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:09.174911    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:09.178137    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:09.178137    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:09.178137    8968 round_trippers.go:580]     Audit-Id: 4b47d08b-9048-4e2a-a132-39af04121df0
	I0910 19:57:09.178137    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:09.178137    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:09.179064    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:09.179064    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:09.179064    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:09 GMT
	I0910 19:57:09.179256    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:09.179628    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
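The poll cycles above repeat every ~500 ms: fetch the pod, fetch its node, then log the pod's `Ready` condition (`pod_ready.go:103`). Minikube's actual check is Go code in `pod_ready.go`; as an illustrative sketch only, the decision it logs amounts to inspecting the `Ready` entry in `status.conditions` of the pod JSON returned by the API server:

```python
# Illustrative sketch, not minikube's actual Go implementation: determine
# pod readiness from the "Ready" condition in the API server's pod object,
# which is what the poll loop above logs as "Ready":"False".

def is_pod_ready(pod: dict) -> bool:
    """Return True only if the pod reports a Ready condition with status "True"."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # no Ready condition present yet (e.g. pod still pending)
```

Each cycle in the log evaluates this predicate against the fresh pod object; since `coredns-6f6b679f8f-srtv8` keeps returning `Ready: False` (resourceVersion 1664 is unchanged), the loop continues polling.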
	I0910 19:57:09.663018    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:09.663139    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:09.663139    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:09.663139    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:09.667682    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:09.667682    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:09.667682    8968 round_trippers.go:580]     Audit-Id: 538ed9ae-ca46-4e9f-8fb5-6360f752f073
	I0910 19:57:09.667796    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:09.667796    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:09.667796    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:09.667796    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:09.667796    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:09 GMT
	I0910 19:57:09.667872    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:09.669110    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:09.669110    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:09.669110    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:09.669208    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:09.671579    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:09.671579    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:09.671579    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:09.671579    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:09.671579    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:09.671579    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:09.671579    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:09 GMT
	I0910 19:57:09.672291    8968 round_trippers.go:580]     Audit-Id: 5f435f97-a27c-450c-97ef-a1fc16d2eba2
	I0910 19:57:09.672771    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:10.162577    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:10.162577    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:10.162679    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:10.162679    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:10.168925    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:10.168925    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:10.168925    8968 round_trippers.go:580]     Audit-Id: 908d4d31-5558-463b-aa8e-ad1240ae1d18
	I0910 19:57:10.168925    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:10.168925    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:10.168925    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:10.168925    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:10.168925    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:10 GMT
	I0910 19:57:10.169552    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:10.170274    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:10.170274    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:10.170274    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:10.170274    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:10.173478    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:10.173478    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:10.173478    8968 round_trippers.go:580]     Audit-Id: 8bb03296-54bc-43dc-92e0-8686cde45563
	I0910 19:57:10.173478    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:10.173478    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:10.173478    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:10.173478    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:10.173478    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:10 GMT
	I0910 19:57:10.174021    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:10.664901    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:10.664963    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:10.664963    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:10.664963    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:10.668405    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:10.668405    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:10.668405    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:10.668405    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:10.668405    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:10.668405    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:10 GMT
	I0910 19:57:10.668405    8968 round_trippers.go:580]     Audit-Id: 4739744b-3f03-4c6f-b194-1dbd22c70c0b
	I0910 19:57:10.668405    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:10.668405    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:10.669414    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:10.669414    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:10.669414    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:10.669414    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:10.671996    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:10.671996    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:10.671996    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:10.671996    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:10.672289    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:10.672289    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:10.672289    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:10 GMT
	I0910 19:57:10.672289    8968 round_trippers.go:580]     Audit-Id: ca76f3d8-e739-4b3e-bbfa-ec1bdfedaab3
	I0910 19:57:10.672702    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:11.164961    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:11.164961    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:11.164961    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:11.165331    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:11.170791    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:11.170791    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:11.170791    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:11.170791    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:11.170791    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:11.170791    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:11 GMT
	I0910 19:57:11.170791    8968 round_trippers.go:580]     Audit-Id: 2651ad92-522f-49f1-a35f-3bc1a182c90f
	I0910 19:57:11.170791    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:11.171324    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:11.171456    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:11.171456    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:11.171456    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:11.171456    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:11.174509    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:11.174509    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:11.174509    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:11.174509    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:11.174509    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:11 GMT
	I0910 19:57:11.174509    8968 round_trippers.go:580]     Audit-Id: fc5924de-a40c-4c49-96d1-ea521c321d69
	I0910 19:57:11.174509    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:11.174509    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:11.174509    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:11.664773    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:11.664868    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:11.664868    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:11.664868    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:11.668928    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:11.668928    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:11.668928    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:11.668928    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:11.668928    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:11 GMT
	I0910 19:57:11.668928    8968 round_trippers.go:580]     Audit-Id: fac53fba-93b1-498c-b103-553508a2bf5a
	I0910 19:57:11.668928    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:11.668928    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:11.669632    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:11.670430    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:11.670430    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:11.670541    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:11.670541    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:11.676792    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:11.677744    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:11.677744    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:11 GMT
	I0910 19:57:11.677744    8968 round_trippers.go:580]     Audit-Id: d5d3896d-9558-48bc-b53a-90de8aa7d16e
	I0910 19:57:11.677744    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:11.677744    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:11.677744    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:11.677744    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:11.677744    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:11.678320    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:12.164258    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:12.164329    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:12.164399    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:12.164399    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:12.170506    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:12.170506    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:12.170506    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:12 GMT
	I0910 19:57:12.170506    8968 round_trippers.go:580]     Audit-Id: 9fc5a8ed-a075-45b7-af0a-48458ba11339
	I0910 19:57:12.170506    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:12.170506    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:12.170506    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:12.170506    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:12.170742    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:12.171466    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:12.171466    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:12.171466    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:12.171466    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:12.174026    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:12.174026    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:12.174927    8968 round_trippers.go:580]     Audit-Id: 6f46e104-723e-438a-9c6b-971abd82c1bd
	I0910 19:57:12.174927    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:12.174927    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:12.174927    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:12.174927    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:12.174927    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:12 GMT
	I0910 19:57:12.175389    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:12.664950    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:12.665007    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:12.665007    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:12.665007    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:12.671770    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:12.671770    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:12.672755    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:12.672755    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:12.672755    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:12.672755    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:12.672755    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:12 GMT
	I0910 19:57:12.672755    8968 round_trippers.go:580]     Audit-Id: b67cec37-a3f9-4b37-88d1-248a33b343f5
	I0910 19:57:12.672755    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:12.673515    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:12.673597    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:12.673597    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:12.673597    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:12.676674    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:12.676771    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:12.676831    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:12.676831    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:12 GMT
	I0910 19:57:12.676831    8968 round_trippers.go:580]     Audit-Id: ed6690ce-4983-4b73-bb3f-493a788d08a1
	I0910 19:57:12.676900    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:12.676900    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:12.676900    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:12.677241    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:13.167259    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:13.167259    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:13.167259    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:13.167259    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:13.171440    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:13.171440    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:13.171440    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:13 GMT
	I0910 19:57:13.171440    8968 round_trippers.go:580]     Audit-Id: 8ddc3fd2-6b33-4525-8f16-77a0ca6a7c56
	I0910 19:57:13.171440    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:13.171440    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:13.171440    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:13.171440    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:13.171719    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:13.172785    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:13.172785    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:13.172785    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:13.172785    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:13.175369    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:13.175369    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:13.175369    8968 round_trippers.go:580]     Audit-Id: f47b60f6-4dcc-47a1-8677-0f5bbbd24c7e
	I0910 19:57:13.175369    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:13.175369    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:13.176293    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:13.176293    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:13.176293    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:13 GMT
	I0910 19:57:13.176341    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:13.665292    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:13.665292    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:13.665292    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:13.665292    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:13.668490    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:13.669523    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:13.669571    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:13 GMT
	I0910 19:57:13.669571    8968 round_trippers.go:580]     Audit-Id: 201f37da-e4a6-4ef2-a216-6c88d4c9b391
	I0910 19:57:13.669571    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:13.669571    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:13.669571    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:13.669571    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:13.669571    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:13.670209    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:13.670209    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:13.670338    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:13.670338    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:13.672693    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:13.673076    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:13.673141    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:13.673141    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:13.673141    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:13.673141    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:13.673141    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:13 GMT
	I0910 19:57:13.673141    8968 round_trippers.go:580]     Audit-Id: 2a1a7c49-1e0e-4df3-bb12-8bdfa10139cb
	I0910 19:57:13.673141    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:14.162355    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:14.162418    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:14.162418    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:14.162481    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:14.168551    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:14.168551    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:14.168551    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:14 GMT
	I0910 19:57:14.168551    8968 round_trippers.go:580]     Audit-Id: 87a5c399-d481-421b-a451-f9393e4281d9
	I0910 19:57:14.168551    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:14.168551    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:14.168551    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:14.168551    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:14.169224    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:14.169224    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:14.169224    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:14.169224    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:14.169224    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:14.172551    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:14.173230    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:14.173230    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:14 GMT
	I0910 19:57:14.173230    8968 round_trippers.go:580]     Audit-Id: 85cecbff-3a79-49a8-a15a-cfa61c56e91b
	I0910 19:57:14.173230    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:14.173230    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:14.173230    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:14.173230    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:14.173514    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:14.173920    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:14.661247    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:14.661247    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:14.661247    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:14.661247    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:14.665282    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:14.665282    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:14.665282    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:14 GMT
	I0910 19:57:14.665282    8968 round_trippers.go:580]     Audit-Id: 76664fc4-4d08-4238-b02a-890a384187fd
	I0910 19:57:14.665282    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:14.665282    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:14.665282    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:14.665282    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:14.665282    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:14.666689    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:14.666774    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:14.666774    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:14.666774    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:14.670081    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:14.670081    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:14.670081    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:14.670456    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:14.670456    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:14 GMT
	I0910 19:57:14.670456    8968 round_trippers.go:580]     Audit-Id: 0497702e-96e6-429e-84f0-4c2bd31b6fea
	I0910 19:57:14.670456    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:14.670456    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:14.670740    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:15.164938    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:15.165004    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:15.165004    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:15.165072    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:15.168474    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:15.168474    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:15.168474    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:15.168474    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:15.168474    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:15 GMT
	I0910 19:57:15.168474    8968 round_trippers.go:580]     Audit-Id: 72518ddd-7e61-4fee-acd2-2c0a18c5cdb0
	I0910 19:57:15.168474    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:15.168474    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:15.169129    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:15.169685    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:15.169685    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:15.169768    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:15.169768    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:15.172919    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:15.172919    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:15.172919    8968 round_trippers.go:580]     Audit-Id: 48411778-c64f-47d6-ae5a-3d02f348b8f9
	I0910 19:57:15.172919    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:15.172919    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:15.172919    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:15.172919    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:15.172919    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:15 GMT
	I0910 19:57:15.173502    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:15.662479    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:15.662479    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:15.662479    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:15.662479    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:15.666282    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:15.666282    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:15.666282    8968 round_trippers.go:580]     Audit-Id: 659ea1bb-c566-4acc-a194-d9437c3936c2
	I0910 19:57:15.666282    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:15.666282    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:15.666282    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:15.666282    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:15.666282    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:15 GMT
	I0910 19:57:15.666282    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:15.667164    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:15.667164    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:15.667164    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:15.667164    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:15.669945    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:57:15.669945    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:15.669945    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:15.669945    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:15 GMT
	I0910 19:57:15.669945    8968 round_trippers.go:580]     Audit-Id: 1f35f206-aea0-47c8-93ca-5a5d87ac5871
	I0910 19:57:15.669945    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:15.669945    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:15.669945    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:15.669945    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:16.160657    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:16.160720    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:16.160720    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:16.160720    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:16.164050    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:16.164050    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:16.164535    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:16.164535    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:16.164535    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:16.164535    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:16 GMT
	I0910 19:57:16.164535    8968 round_trippers.go:580]     Audit-Id: 586fad75-2a87-4a46-93d9-1ffee52298d8
	I0910 19:57:16.164535    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:16.164987    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:16.166010    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:16.166010    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:16.166010    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:16.166010    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:16.168580    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:16.168580    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:16.168580    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:16 GMT
	I0910 19:57:16.168580    8968 round_trippers.go:580]     Audit-Id: 6f13298f-932b-464f-b109-2d16630fae59
	I0910 19:57:16.168580    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:16.168580    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:16.168580    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:16.168580    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:16.169459    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:16.665678    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:16.665758    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:16.665758    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:16.665758    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:16.669530    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:16.669530    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:16.669530    8968 round_trippers.go:580]     Audit-Id: 72b94d77-2045-4bcb-b34c-909dca1615a1
	I0910 19:57:16.669530    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:16.669530    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:16.669530    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:16.669530    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:16.669750    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:16 GMT
	I0910 19:57:16.669972    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:16.671100    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:16.671156    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:16.671209    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:16.671209    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:16.673558    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:16.673558    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:16.673558    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:16.673558    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:16.674516    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:16.674516    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:16.674516    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:16 GMT
	I0910 19:57:16.674516    8968 round_trippers.go:580]     Audit-Id: 34c8bae0-36d4-49c0-8aa5-4888063bf09d
	I0910 19:57:16.674813    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:16.675047    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:17.164482    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:17.164552    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:17.164552    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:17.164602    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:17.168302    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:17.168358    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:17.168358    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:17.168358    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:17.168358    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:17.168358    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:17.168358    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:17 GMT
	I0910 19:57:17.168358    8968 round_trippers.go:580]     Audit-Id: 60b4c75b-c960-4c7f-aecf-8e44f5e67bef
	I0910 19:57:17.168358    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:17.169135    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:17.169135    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:17.169135    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:17.169135    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:17.171381    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:17.172289    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:17.172289    8968 round_trippers.go:580]     Audit-Id: 88fd094a-3431-43df-a677-132d108b26d3
	I0910 19:57:17.172289    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:17.172289    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:17.172289    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:17.172289    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:17.172289    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:17 GMT
	I0910 19:57:17.172559    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:17.661204    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:17.661328    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:17.661328    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:17.661328    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:17.664780    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:17.664780    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:17.664780    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:17.664780    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:17.664780    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:17 GMT
	I0910 19:57:17.665397    8968 round_trippers.go:580]     Audit-Id: a6b55a40-d22e-4f78-8e4f-6c672597c645
	I0910 19:57:17.665397    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:17.665452    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:17.665646    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:17.666722    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:17.666722    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:17.666805    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:17.666805    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:17.669747    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:17.669747    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:17.669854    8968 round_trippers.go:580]     Audit-Id: 1ad5007e-482d-4d17-b8de-55e8f0a58b90
	I0910 19:57:17.669854    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:17.669854    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:17.669854    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:17.669944    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:17.669944    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:17 GMT
	I0910 19:57:17.670372    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:18.160359    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:18.160431    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:18.160511    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:18.160511    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:18.165990    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:18.165990    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:18.165990    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:18.165990    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:18.165990    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:18 GMT
	I0910 19:57:18.165990    8968 round_trippers.go:580]     Audit-Id: ddbb3f3c-1c4f-47ca-840f-02e9aff59517
	I0910 19:57:18.165990    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:18.165990    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:18.166524    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:18.167502    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:18.167502    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:18.167502    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:18.167502    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:18.170421    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:18.170421    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:18.170421    8968 round_trippers.go:580]     Audit-Id: 0c6fc69a-90e3-45b4-8332-1685d4d1b3ca
	I0910 19:57:18.170421    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:18.170421    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:18.170421    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:18.170421    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:18.170421    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:18 GMT
	I0910 19:57:18.170421    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:18.665949    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:18.666011    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:18.666011    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:18.666075    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:18.669374    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:18.670040    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:18.670040    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:18.670040    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:18.670130    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:18 GMT
	I0910 19:57:18.670130    8968 round_trippers.go:580]     Audit-Id: ec22695c-1bf4-416e-a880-e3fa01eff647
	I0910 19:57:18.670130    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:18.670130    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:18.670397    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:18.671541    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:18.671541    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:18.671541    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:18.671541    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:18.676142    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:18.676142    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:18.676142    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:18.676142    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:18.676142    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:18.676142    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:18 GMT
	I0910 19:57:18.676142    8968 round_trippers.go:580]     Audit-Id: 718e92c7-6101-4a00-8756-b807dc5c5b92
	I0910 19:57:18.676142    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:18.676940    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:18.677362    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:19.160212    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:19.160212    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:19.160212    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:19.160212    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:19.164603    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:19.164603    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:19.164603    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:19.164765    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:19 GMT
	I0910 19:57:19.164765    8968 round_trippers.go:580]     Audit-Id: d5e55e1c-ba66-40ec-a617-e99d391f6b00
	I0910 19:57:19.164765    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:19.164765    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:19.164830    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:19.165018    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:19.165613    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:19.165613    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:19.166133    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:19.166171    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:19.170431    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:19.170520    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:19.170520    8968 round_trippers.go:580]     Audit-Id: 2259e8fe-3ad6-46db-a019-471e10feb1cb
	I0910 19:57:19.170571    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:19.170571    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:19.170571    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:19.170571    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:19.170571    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:19 GMT
	I0910 19:57:19.171296    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:19.664292    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:19.664376    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:19.664376    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:19.664376    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:19.667717    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:19.668060    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:19.668060    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:19.668060    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:19 GMT
	I0910 19:57:19.668060    8968 round_trippers.go:580]     Audit-Id: 7807eb53-0060-4480-af0f-a4c6dd506464
	I0910 19:57:19.668164    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:19.668164    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:19.668164    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:19.668496    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:19.669601    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:19.669693    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:19.669693    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:19.669693    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:19.672355    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:19.672982    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:19.672982    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:19 GMT
	I0910 19:57:19.672982    8968 round_trippers.go:580]     Audit-Id: ffa823b5-d983-458d-8007-dd7f36c9a720
	I0910 19:57:19.672982    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:19.672982    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:19.672982    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:19.672982    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:19.673228    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.160267    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:20.160267    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.160639    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.160639    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.164336    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:20.164414    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.164414    8968 round_trippers.go:580]     Audit-Id: 01415253-c07e-4681-b411-7ce2bf200b70
	I0910 19:57:20.164414    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.164504    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.164504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.164504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.164504    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.165337    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0910 19:57:20.166491    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.166491    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.166491    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.166491    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.172394    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:20.172394    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.172394    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.172394    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.172394    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.172394    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.172394    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.172394    8968 round_trippers.go:580]     Audit-Id: fd3ff111-50f0-46a9-8c02-612911e99e43
	I0910 19:57:20.173340    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.173340    8968 pod_ready.go:93] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.173340    8968 pod_ready.go:82] duration metric: took 17.5212573s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.173340    8968 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.174024    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 19:57:20.174024    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.174024    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.174092    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.177005    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.177005    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.177005    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.177005    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.177005    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.177005    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.177005    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.177005    8968 round_trippers.go:580]     Audit-Id: d50cb178-a2bd-4292-b612-78040450e7b2
	I0910 19:57:20.177005    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1766","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I0910 19:57:20.178018    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.178747    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.178777    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.178891    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.181155    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.181155    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.181155    8968 round_trippers.go:580]     Audit-Id: 75b7c548-e5fc-4567-a30e-06fc802ce4f7
	I0910 19:57:20.181155    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.181155    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.181155    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.181155    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.181155    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.182015    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.182406    8968 pod_ready.go:93] pod "etcd-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.182470    8968 pod_ready.go:82] duration metric: took 9.0651ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.182470    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.182536    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 19:57:20.182594    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.182594    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.182594    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.184992    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.184992    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.184992    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.184992    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.184992    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.184992    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.184992    8968 round_trippers.go:580]     Audit-Id: ecaf97db-b10e-47bf-9a0c-3731632e7426
	I0910 19:57:20.184992    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.185824    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5","resourceVersion":"1763","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.215.172:8443","kubernetes.io/config.hash":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.mirror":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.seen":"2024-09-10T19:56:41.591483300Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I0910 19:57:20.185824    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.185824    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.185824    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.185824    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.189085    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:20.189085    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.189085    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.189085    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.189085    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.189085    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.189085    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.189085    8968 round_trippers.go:580]     Audit-Id: d8962aca-72f0-4df9-b96f-db48a0c013c4
	I0910 19:57:20.189641    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.189641    8968 pod_ready.go:93] pod "kube-apiserver-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.189641    8968 pod_ready.go:82] duration metric: took 7.1703ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.189641    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.189641    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 19:57:20.189641    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.189641    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.189641    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.192216    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.192216    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.192216    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.192216    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.192216    8968 round_trippers.go:580]     Audit-Id: d8b87831-f303-4edb-b67f-b325bdc8bc73
	I0910 19:57:20.192216    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.192216    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.192216    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.193205    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"1770","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0910 19:57:20.193205    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.193205    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.193205    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.193205    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.196188    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.196188    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.196188    8968 round_trippers.go:580]     Audit-Id: b8c6e9a1-5de8-43d4-aece-bfab104a8708
	I0910 19:57:20.196188    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.196188    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.196188    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.196188    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.196188    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.196396    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.196837    8968 pod_ready.go:93] pod "kube-controller-manager-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.196837    8968 pod_ready.go:82] duration metric: took 7.1955ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.196837    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.196974    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:57:20.196974    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.196974    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.196974    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.199389    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.199698    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.199724    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.199724    8968 round_trippers.go:580]     Audit-Id: 7b6102e3-a58f-4273-be13-bfe7f8612ec9
	I0910 19:57:20.199724    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.199724    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.199759    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.199759    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.199916    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4tzx6","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bb18c28-3ee9-4028-a61d-3d7f6ea31894","resourceVersion":"1613","creationTimestamp":"2024-09-10T19:42:55Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0910 19:57:20.200419    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:57:20.200419    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.200419    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.200419    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.202800    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.202849    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.202849    8968 round_trippers.go:580]     Audit-Id: 66d20dd7-4e80-4b48-9b6b-b7e41d8b1b3a
	I0910 19:57:20.202849    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.202849    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.202849    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.202849    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.202849    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.203082    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"39f4665c-e276-4967-a647-46b3a9a1c67a","resourceVersion":"1764","creationTimestamp":"2024-09-10T19:52:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_52_30_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4394 chars]
	I0910 19:57:20.203082    8968 pod_ready.go:98] node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:57:20.203082    8968 pod_ready.go:82] duration metric: took 6.245ms for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	E0910 19:57:20.203082    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:57:20.203082    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.362542    8968 request.go:632] Waited for 159.2048ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:57:20.362692    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:57:20.362692    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.362747    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.362774    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.366813    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:20.366813    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.366889    8968 round_trippers.go:580]     Audit-Id: cf198a53-b24f-4fc8-ac4b-f49346652a41
	I0910 19:57:20.366889    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.366889    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.366889    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.366889    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.366954    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.367975    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qqrrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fc7fdda-d5e4-4c72-96c1-2348eb72b491","resourceVersion":"580","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0910 19:57:20.567891    8968 request.go:632] Waited for 198.8872ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:57:20.567891    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:57:20.567891    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.567891    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.567891    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.570555    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.571539    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.571567    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.571567    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.571567    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.571567    8968 round_trippers.go:580]     Audit-Id: 06f30ea1-31b7-4eaa-b53e-b6b962fe11bd
	I0910 19:57:20.571567    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.571567    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.571911    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"1311","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3819 chars]
	I0910 19:57:20.572510    8968 pod_ready.go:93] pod "kube-proxy-qqrrg" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.572510    8968 pod_ready.go:82] duration metric: took 369.4036ms for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.572607    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.771909    8968 request.go:632] Waited for 199.2892ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:57:20.772211    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:57:20.772512    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.772512    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.772512    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.775824    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:20.776294    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.776379    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.776379    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.776379    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.776379    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:20.776379    8968 round_trippers.go:580]     Audit-Id: 15c46eb1-3e13-49e7-bc7c-cb609a3758cf
	I0910 19:57:20.776379    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.776680    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"1662","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0910 19:57:20.975272    8968 request.go:632] Waited for 197.5848ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.975467    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.975467    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.975467    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.975467    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.979079    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.979079    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.979154    8968 round_trippers.go:580]     Audit-Id: 011253b1-ac7c-40b3-9f25-ea7150df599c
	I0910 19:57:20.979154    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.979154    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.979154    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.979154    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.979212    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:20.979484    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.980118    8968 pod_ready.go:93] pod "kube-proxy-wqf2d" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.980118    8968 pod_ready.go:82] duration metric: took 407.4843ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.980118    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:21.162025    8968 request.go:632] Waited for 181.7709ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:57:21.162199    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:57:21.162199    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.162199    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.162199    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.167623    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:21.167623    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.167623    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.167623    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.167623    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:21.167623    8968 round_trippers.go:580]     Audit-Id: 01323a6e-ef70-4169-af83-e48291e93d51
	I0910 19:57:21.167623    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.167623    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.168738    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"1757","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0910 19:57:21.363555    8968 request.go:632] Waited for 194.041ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:21.363555    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:21.363555    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.363555    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.363555    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.367135    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:21.367135    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.367135    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.367135    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:21.367135    8968 round_trippers.go:580]     Audit-Id: 4b7f7949-a52a-45af-a74b-3983206971f2
	I0910 19:57:21.367978    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.368079    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.368079    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.368513    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:21.368653    8968 pod_ready.go:93] pod "kube-scheduler-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:21.368653    8968 pod_ready.go:82] duration metric: took 388.5092ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:21.368653    8968 pod_ready.go:39] duration metric: took 18.7277589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:57:21.369216    8968 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:57:21.380454    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:57:21.404406    8968 command_runner.go:130] > 1954
	I0910 19:57:21.404406    8968 api_server.go:72] duration metric: took 30.5462048s to wait for apiserver process to appear ...
	I0910 19:57:21.404539    8968 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:57:21.404539    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:57:21.413739    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 200:
	ok
	I0910 19:57:21.414104    8968 round_trippers.go:463] GET https://172.31.215.172:8443/version
	I0910 19:57:21.414174    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.414174    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.414174    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.418082    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:21.418129    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.418129    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.418129    8968 round_trippers.go:580]     Content-Length: 263
	I0910 19:57:21.418129    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:21.418291    8968 round_trippers.go:580]     Audit-Id: c1b3837d-54aa-45fc-8e3c-fba7b5325f45
	I0910 19:57:21.418291    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.418291    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.418328    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.418508    8968 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0910 19:57:21.418508    8968 api_server.go:141] control plane version: v1.31.0
	I0910 19:57:21.418508    8968 api_server.go:131] duration metric: took 13.9686ms to wait for apiserver health ...
	I0910 19:57:21.418508    8968 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:57:21.569797    8968 request.go:632] Waited for 151.2785ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:21.569957    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:21.569957    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.569957    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.569957    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.574584    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:21.575592    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.575592    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:21.575592    8968 round_trippers.go:580]     Audit-Id: f839a8a2-8f8c-4876-bcf1-73dd47b37852
	I0910 19:57:21.575592    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.575592    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.575592    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.575592    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.577629    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1807"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89970 chars]
	I0910 19:57:21.581502    8968 system_pods.go:59] 12 kube-system pods found
	I0910 19:57:21.581502    8968 system_pods.go:61] "coredns-6f6b679f8f-srtv8" [76dd899a-75f4-497d-a6a9-6b263f3a379d] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "etcd-multinode-629100" [2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kindnet-5crht" [d569a3a6-5b06-4adf-9ac0-294274923906] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kindnet-6tdpv" [2c45f0f2-5d24-4ec2-8e6b-06923ea85e78] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kindnet-lj2v2" [6643175a-32c5-4461-a441-08b9ac9ba98a] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-apiserver-multinode-629100" [5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-controller-manager-multinode-629100" [60adcb0c-808a-477c-9432-83dc8f96d6c0] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-proxy-4tzx6" [9bb18c28-3ee9-4028-a61d-3d7f6ea31894] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-proxy-qqrrg" [1fc7fdda-d5e4-4c72-96c1-2348eb72b491] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-proxy-wqf2d" [27e7846e-e506-48e0-96b9-351b3ca91703] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-scheduler-multinode-629100" [fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "storage-provisioner" [511e6e52-fc2a-4562-9903-42ee1f2e0a2d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:57:21.581502    8968 system_pods.go:74] duration metric: took 162.9834ms to wait for pod list to return data ...
	I0910 19:57:21.581502    8968 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:57:21.774841    8968 request.go:632] Waited for 193.3257ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/default/serviceaccounts
	I0910 19:57:21.775075    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/default/serviceaccounts
	I0910 19:57:21.775075    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.775075    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.775075    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.777662    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:21.778707    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.778707    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.778707    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Content-Length: 262
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:22 GMT
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Audit-Id: 964d94c8-4b92-4126-b01b-0a91d4cabb7a
	I0910 19:57:21.778804    8968 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1807"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5ec55b5c-25b1-4463-9e6c-90f1cae6d2f9","resourceVersion":"302","creationTimestamp":"2024-09-10T19:35:46Z"}}]}
	I0910 19:57:21.779276    8968 default_sa.go:45] found service account: "default"
	I0910 19:57:21.779276    8968 default_sa.go:55] duration metric: took 197.7608ms for default service account to be created ...
	I0910 19:57:21.779360    8968 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:57:21.962162    8968 request.go:632] Waited for 182.3324ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:21.962162    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:21.962162    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.962162    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.962162    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.966067    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:21.966067    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.966067    8968 round_trippers.go:580]     Audit-Id: 0423efcc-f10a-4002-9e37-e2a458baa7a9
	I0910 19:57:21.966067    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.966067    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.966067    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.966067    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.966067    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:22 GMT
	I0910 19:57:21.968299    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1807"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89970 chars]
	I0910 19:57:21.974911    8968 system_pods.go:86] 12 kube-system pods found
	I0910 19:57:21.974911    8968 system_pods.go:89] "coredns-6f6b679f8f-srtv8" [76dd899a-75f4-497d-a6a9-6b263f3a379d] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "etcd-multinode-629100" [2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kindnet-5crht" [d569a3a6-5b06-4adf-9ac0-294274923906] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kindnet-6tdpv" [2c45f0f2-5d24-4ec2-8e6b-06923ea85e78] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kindnet-lj2v2" [6643175a-32c5-4461-a441-08b9ac9ba98a] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-apiserver-multinode-629100" [5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-controller-manager-multinode-629100" [60adcb0c-808a-477c-9432-83dc8f96d6c0] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-proxy-4tzx6" [9bb18c28-3ee9-4028-a61d-3d7f6ea31894] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-proxy-qqrrg" [1fc7fdda-d5e4-4c72-96c1-2348eb72b491] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-proxy-wqf2d" [27e7846e-e506-48e0-96b9-351b3ca91703] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-scheduler-multinode-629100" [fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "storage-provisioner" [511e6e52-fc2a-4562-9903-42ee1f2e0a2d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:57:21.974911    8968 system_pods.go:126] duration metric: took 195.5383ms to wait for k8s-apps to be running ...
	I0910 19:57:21.974911    8968 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:57:21.985280    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:57:22.010423    8968 system_svc.go:56] duration metric: took 35.5093ms WaitForService to wait for kubelet
	I0910 19:57:22.010423    8968 kubeadm.go:582] duration metric: took 31.1521818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:57:22.010423    8968 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:57:22.163093    8968 request.go:632] Waited for 151.9344ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes
	I0910 19:57:22.163173    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes
	I0910 19:57:22.163242    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:22.163242    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:22.163242    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:22.169715    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:22.169715    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:22.169715    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:22.169715    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:22.169715    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:22 GMT
	I0910 19:57:22.169715    8968 round_trippers.go:580]     Audit-Id: a98a44fe-1b51-46f9-ac76-62ddd76c5b45
	I0910 19:57:22.169715    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:22.169715    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:22.169715    8968 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1807"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15482 chars]
	I0910 19:57:22.171354    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:57:22.171411    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:57:22.171438    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:57:22.171438    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:57:22.171438    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:57:22.171438    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:57:22.171438    8968 node_conditions.go:105] duration metric: took 161.0048ms to run NodePressure ...
	I0910 19:57:22.171498    8968 start.go:241] waiting for startup goroutines ...
	I0910 19:57:22.171498    8968 start.go:246] waiting for cluster config update ...
	I0910 19:57:22.171498    8968 start.go:255] writing updated cluster config ...
	I0910 19:57:22.175344    8968 out.go:201] 
	I0910 19:57:22.178396    8968 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:57:22.191223    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:57:22.191223    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:57:22.198746    8968 out.go:177] * Starting "multinode-629100-m02" worker node in "multinode-629100" cluster
	I0910 19:57:22.200146    8968 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:57:22.201144    8968 cache.go:56] Caching tarball of preloaded images
	I0910 19:57:22.201144    8968 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 19:57:22.201144    8968 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 19:57:22.201144    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:57:22.202709    8968 start.go:360] acquireMachinesLock for multinode-629100-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:57:22.203735    8968 start.go:364] duration metric: took 1.0266ms to acquireMachinesLock for "multinode-629100-m02"
	I0910 19:57:22.203956    8968 start.go:96] Skipping create...Using existing machine configuration
	I0910 19:57:22.203956    8968 fix.go:54] fixHost starting: m02
	I0910 19:57:22.204124    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:24.077130    8968 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 19:57:24.077130    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:24.077130    8968 fix.go:112] recreateIfNeeded on multinode-629100-m02: state=Stopped err=<nil>
	W0910 19:57:24.077130    8968 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 19:57:24.082968    8968 out.go:177] * Restarting existing hyperv VM for "multinode-629100-m02" ...
	I0910 19:57:24.084630    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-629100-m02
	I0910 19:57:26.822921    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:26.822921    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:26.823025    8968 main.go:141] libmachine: Waiting for host to start...
	I0910 19:57:26.823060    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:28.791060    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:28.791132    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:28.791202    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:31.067546    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:31.067771    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:32.082153    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:34.021730    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:34.021730    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:34.022792    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:36.224642    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:36.224642    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:37.232461    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:39.147450    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:39.147532    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:39.147618    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:41.331839    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:41.331839    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:42.339860    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:44.316685    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:44.317302    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:44.317369    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:46.542833    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:46.542833    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:47.557305    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:49.513438    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:49.513750    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:49.513750    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:51.782750    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:57:51.782750    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:51.784941    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:53.687447    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:53.687447    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:53.688156    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:55.961266    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:57:55.961266    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:55.962278    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:57:55.964406    8968 machine.go:93] provisionDockerMachine start ...
	I0910 19:57:55.964489    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:57.793217    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:57.793217    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:57.793856    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:00.045842    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:00.045842    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:00.049608    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:00.050121    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:00.050121    8968 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:58:00.183996    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:58:00.184076    8968 buildroot.go:166] provisioning hostname "multinode-629100-m02"
	I0910 19:58:00.184153    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:02.087899    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:02.088894    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:02.089102    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:04.364340    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:04.364340    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:04.369814    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:04.370464    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:04.370464    8968 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629100-m02 && echo "multinode-629100-m02" | sudo tee /etc/hostname
	I0910 19:58:04.527401    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629100-m02
	
	I0910 19:58:04.527589    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:06.390253    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:06.390253    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:06.391350    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:08.643044    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:08.643044    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:08.647016    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:08.647661    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:08.647661    8968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:58:08.804525    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:58:08.804525    8968 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 19:58:08.804525    8968 buildroot.go:174] setting up certificates
	I0910 19:58:08.804525    8968 provision.go:84] configureAuth start
	I0910 19:58:08.804525    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:10.661240    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:10.661240    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:10.662128    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:12.884507    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:12.885091    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:12.885091    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:14.754237    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:14.754237    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:14.754237    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:16.951876    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:16.951957    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:16.951957    8968 provision.go:143] copyHostCerts
	I0910 19:58:16.952205    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 19:58:16.952509    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 19:58:16.952509    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 19:58:16.952982    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 19:58:16.954244    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 19:58:16.954508    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 19:58:16.954508    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 19:58:16.954912    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 19:58:16.956137    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 19:58:16.956397    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 19:58:16.956465    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 19:58:16.956769    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 19:58:16.957536    8968 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-629100-m02 san=[127.0.0.1 172.31.210.34 localhost minikube multinode-629100-m02]
	I0910 19:58:17.028662    8968 provision.go:177] copyRemoteCerts
	I0910 19:58:17.036800    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:58:17.036800    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:18.910091    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:18.910329    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:18.910402    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:21.144201    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:21.144201    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:21.144567    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:58:21.254579    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2175016s)
	I0910 19:58:21.254579    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 19:58:21.255584    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 19:58:21.300544    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 19:58:21.300947    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0910 19:58:21.344116    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 19:58:21.344116    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:58:21.383939    8968 provision.go:87] duration metric: took 12.5785846s to configureAuth
	I0910 19:58:21.383939    8968 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:58:21.385078    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:58:21.385309    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:23.273912    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:23.274133    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:23.274216    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:25.499488    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:25.499488    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:25.503783    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:25.503993    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:25.503993    8968 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 19:58:25.646704    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 19:58:25.646802    8968 buildroot.go:70] root file system type: tmpfs
	I0910 19:58:25.646924    8968 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 19:58:25.647033    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:27.480439    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:27.480736    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:27.480830    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:29.716955    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:29.716955    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:29.721049    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:29.721803    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:29.721803    8968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.215.172"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 19:58:29.883342    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.215.172
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 19:58:29.883342    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:31.758273    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:31.758273    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:31.758273    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:33.990420    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:33.990420    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:33.994608    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:33.995058    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:33.995058    8968 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 19:58:36.327414    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 19:58:36.327414    8968 machine.go:96] duration metric: took 40.3603513s to provisionDockerMachine
	I0910 19:58:36.327953    8968 start.go:293] postStartSetup for "multinode-629100-m02" (driver="hyperv")
	I0910 19:58:36.327989    8968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:58:36.337253    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:58:36.337253    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:38.192917    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:38.193589    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:38.193589    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:40.448270    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:40.448270    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:40.449281    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:58:40.560841    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2233089s)
	I0910 19:58:40.574781    8968 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:58:40.581280    8968 command_runner.go:130] > NAME=Buildroot
	I0910 19:58:40.581390    8968 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 19:58:40.581390    8968 command_runner.go:130] > ID=buildroot
	I0910 19:58:40.581390    8968 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 19:58:40.581390    8968 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 19:58:40.581466    8968 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:58:40.581494    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 19:58:40.581709    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 19:58:40.581818    8968 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 19:58:40.582347    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 19:58:40.590537    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:58:40.609201    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 19:58:40.654592    8968 start.go:296] duration metric: took 4.3263177s for postStartSetup
	I0910 19:58:40.654673    8968 fix.go:56] duration metric: took 1m18.4455612s for fixHost
	I0910 19:58:40.654673    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:42.517729    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:42.517729    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:42.517808    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:44.733070    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:44.733070    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:44.736859    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:44.737520    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:44.737520    8968 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:58:44.866431    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725998325.065080015
	
	I0910 19:58:44.866494    8968 fix.go:216] guest clock: 1725998325.065080015
	I0910 19:58:44.866554    8968 fix.go:229] Guest: 2024-09-10 19:58:45.065080015 +0000 UTC Remote: 2024-09-10 19:58:40.6546731 +0000 UTC m=+229.363864501 (delta=4.410406915s)
	I0910 19:58:44.866616    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:46.705155    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:46.705155    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:46.705155    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:48.950966    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:48.950966    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:48.954845    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:48.955435    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:48.955435    8968 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725998324
	I0910 19:58:49.104380    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 19:58:44 UTC 2024
	
	I0910 19:58:49.104380    8968 fix.go:236] clock set: Tue Sep 10 19:58:44 UTC 2024
	 (err=<nil>)
	I0910 19:58:49.104380    8968 start.go:83] releasing machines lock for "multinode-629100-m02", held for 1m26.8948577s
	I0910 19:58:49.104380    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:50.952170    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:50.952170    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:50.952493    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:53.188195    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:53.188195    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:53.191677    8968 out.go:177] * Found network options:
	I0910 19:58:53.194159    8968 out.go:177]   - NO_PROXY=172.31.215.172
	W0910 19:58:53.196662    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 19:58:53.199651    8968 out.go:177]   - NO_PROXY=172.31.215.172
	W0910 19:58:53.202566    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 19:58:53.204129    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 19:58:53.205815    8968 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 19:58:53.206007    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:53.213927    8968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 19:58:53.213927    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:55.107993    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:55.108695    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:55.108830    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:55.130420    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:55.131430    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:55.131490    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:57.413249    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:57.413249    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:57.413558    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:58:57.436069    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:57.436069    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:57.436484    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:58:57.515254    8968 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0910 19:58:57.515353    8968 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.309149s)
	W0910 19:58:57.515444    8968 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 19:58:57.532343    8968 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0910 19:58:57.532856    8968 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3186431s)
	W0910 19:58:57.532856    8968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:58:57.541585    8968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:58:57.567812    8968 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0910 19:58:57.567812    8968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:58:57.568008    8968 start.go:495] detecting cgroup driver to use...
	I0910 19:58:57.568185    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:58:57.602759    8968 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0910 19:58:57.611976    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 19:58:57.637801    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0910 19:58:57.638881    8968 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 19:58:57.638881    8968 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 19:58:57.657829    8968 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 19:58:57.667103    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 19:58:57.694512    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:58:57.722238    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 19:58:57.749803    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:58:57.777151    8968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:58:57.804267    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 19:58:57.829298    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 19:58:57.856113    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 19:58:57.882607    8968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:58:57.898739    8968 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 19:58:57.906942    8968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:58:57.933108    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:58:58.096983    8968 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 19:58:58.124330    8968 start.go:495] detecting cgroup driver to use...
	I0910 19:58:58.133420    8968 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 19:58:58.154953    8968 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0910 19:58:58.155000    8968 command_runner.go:130] > [Unit]
	I0910 19:58:58.155000    8968 command_runner.go:130] > Description=Docker Application Container Engine
	I0910 19:58:58.155000    8968 command_runner.go:130] > Documentation=https://docs.docker.com
	I0910 19:58:58.155000    8968 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0910 19:58:58.155093    8968 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0910 19:58:58.155093    8968 command_runner.go:130] > StartLimitBurst=3
	I0910 19:58:58.155093    8968 command_runner.go:130] > StartLimitIntervalSec=60
	I0910 19:58:58.155170    8968 command_runner.go:130] > [Service]
	I0910 19:58:58.155170    8968 command_runner.go:130] > Type=notify
	I0910 19:58:58.155170    8968 command_runner.go:130] > Restart=on-failure
	I0910 19:58:58.155170    8968 command_runner.go:130] > Environment=NO_PROXY=172.31.215.172
	I0910 19:58:58.155170    8968 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0910 19:58:58.155170    8968 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0910 19:58:58.155170    8968 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0910 19:58:58.155253    8968 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0910 19:58:58.155253    8968 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0910 19:58:58.155253    8968 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0910 19:58:58.155253    8968 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0910 19:58:58.155253    8968 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0910 19:58:58.155337    8968 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0910 19:58:58.155337    8968 command_runner.go:130] > ExecStart=
	I0910 19:58:58.155337    8968 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0910 19:58:58.155337    8968 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0910 19:58:58.155410    8968 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0910 19:58:58.155410    8968 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0910 19:58:58.155410    8968 command_runner.go:130] > LimitNOFILE=infinity
	I0910 19:58:58.155410    8968 command_runner.go:130] > LimitNPROC=infinity
	I0910 19:58:58.155410    8968 command_runner.go:130] > LimitCORE=infinity
	I0910 19:58:58.155410    8968 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0910 19:58:58.155479    8968 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0910 19:58:58.155479    8968 command_runner.go:130] > TasksMax=infinity
	I0910 19:58:58.155479    8968 command_runner.go:130] > TimeoutStartSec=0
	I0910 19:58:58.155541    8968 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0910 19:58:58.155565    8968 command_runner.go:130] > Delegate=yes
	I0910 19:58:58.155565    8968 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0910 19:58:58.155594    8968 command_runner.go:130] > KillMode=process
	I0910 19:58:58.155594    8968 command_runner.go:130] > [Install]
	I0910 19:58:58.155594    8968 command_runner.go:130] > WantedBy=multi-user.target
	I0910 19:58:58.163062    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:58:58.193574    8968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:58:58.235402    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:58:58.265516    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:58:58.294415    8968 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 19:58:58.344693    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:58:58.368688    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:58:58.401898    8968 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0910 19:58:58.410974    8968 ssh_runner.go:195] Run: which cri-dockerd
	I0910 19:58:58.417197    8968 command_runner.go:130] > /usr/bin/cri-dockerd
	I0910 19:58:58.425577    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 19:58:58.442117    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
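The 190 bytes copied to /etc/systemd/system/cri-docker.service.d/10-cni.conf here are a systemd drop-in that points cri-dockerd at the CNI network plugin. The payload itself is not echoed in this log; a plausible sketch (the exact flags are assumptions, not taken from this run) looks like:

```ini
# 10-cni.conf -- systemd drop-in for cri-docker.service (flag values assumed)
[Service]
ExecStart=
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --hairpin-mode=promiscuous-bridge
```

The empty `ExecStart=` line is the standard systemd idiom for clearing the base unit's command before overriding it.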
	I0910 19:58:58.479219    8968 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 19:58:58.670245    8968 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 19:58:58.838491    8968 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 19:58:58.838557    8968 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
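The preceding docker.go line says minikube is configuring Docker to use "cgroupfs" as the cgroup driver, and the 130-byte payload written to /etc/docker/daemon.json carries that setting. A minimal sketch of such a file (only the cgroupfs option is confirmed by the log; the other keys are assumptions):

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
```

The `daemon-reload` and `restart docker` commands that follow are what make this file take effect.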
	I0910 19:58:58.880336    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:58:59.046519    8968 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 19:59:01.690700    8968 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6440044s)
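ssh_runner appends a "Completed: &lt;cmd&gt;: (&lt;duration&gt;s)" suffix for any command that runs long enough to be worth reporting, as with the 2.64 s docker restart above. A small parser (a hypothetical helper, not part of minikube) can pull these durations out of a log for profiling slow steps:

```python
import re

# Matches ssh_runner "Completed:" lines, capturing the command and its duration.
COMPLETED_RE = re.compile(r"Completed: (?P<cmd>.+): \((?P<secs>[0-9.]+)s\)")

def completed_durations(log_lines):
    """Return (command, seconds) pairs for every ssh_runner Completed line."""
    out = []
    for line in log_lines:
        m = COMPLETED_RE.search(line)
        if m:
            out.append((m.group("cmd"), float(m.group("secs"))))
    return out

line = ("I0910 19:59:01.690700    8968 ssh_runner.go:235] "
        "Completed: sudo systemctl restart docker: (2.6440044s)")
print(completed_durations([line]))
```

Summing these pairs over a whole report is a quick way to see which provisioning steps dominate a slow run.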
	I0910 19:59:01.700354    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 19:59:01.734787    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:59:01.767934    8968 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 19:59:01.957340    8968 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 19:59:02.132422    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:59:02.305370    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 19:59:02.344985    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:59:02.374215    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:59:02.553319    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 19:59:02.648803    8968 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 19:59:02.657346    8968 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 19:59:02.665338    8968 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0910 19:59:02.665338    8968 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 19:59:02.665338    8968 command_runner.go:130] > Device: 0,22	Inode: 866         Links: 1
	I0910 19:59:02.665338    8968 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0910 19:59:02.665338    8968 command_runner.go:130] > Access: 2024-09-10 19:59:02.797986721 +0000
	I0910 19:59:02.665338    8968 command_runner.go:130] > Modify: 2024-09-10 19:59:02.797986721 +0000
	I0910 19:59:02.665338    8968 command_runner.go:130] > Change: 2024-09-10 19:59:02.800987138 +0000
	I0910 19:59:02.665338    8968 command_runner.go:130] >  Birth: -
	I0910 19:59:02.665338    8968 start.go:563] Will wait 60s for crictl version
	I0910 19:59:02.672339    8968 ssh_runner.go:195] Run: which crictl
	I0910 19:59:02.678477    8968 command_runner.go:130] > /usr/bin/crictl
	I0910 19:59:02.686231    8968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:59:02.733057    8968 command_runner.go:130] > Version:  0.1.0
	I0910 19:59:02.733162    8968 command_runner.go:130] > RuntimeName:  docker
	I0910 19:59:02.733162    8968 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0910 19:59:02.733162    8968 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 19:59:02.735131    8968 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 19:59:02.743347    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:59:02.776022    8968 command_runner.go:130] > 27.2.0
	I0910 19:59:02.784928    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:59:02.819061    8968 command_runner.go:130] > 27.2.0
	I0910 19:59:02.822643    8968 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 19:59:02.825640    8968 out.go:177]   - env NO_PROXY=172.31.215.172
	I0910 19:59:02.827666    8968 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 19:59:02.831639    8968 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 19:59:02.831639    8968 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 19:59:02.831639    8968 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 19:59:02.831639    8968 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 19:59:02.833638    8968 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 19:59:02.833638    8968 ip.go:214] interface addr: 172.31.208.1/20
	I0910 19:59:02.841651    8968 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 19:59:02.847815    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
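The /bin/bash one-liner above rewrites /etc/hosts idempotently: it filters out any existing `host.minikube.internal` line, appends the fresh `ip<TAB>hostname` entry, and copies the temp file back into place. The same rewrite can be sketched in Python (a hypothetical helper for illustration, not minikube's code):

```python
def upsert_hosts_entry(hosts_text, ip, hostname):
    """Drop any existing line ending in '\t<hostname>', then append
    'ip\thostname' -- mirroring the grep -v / echo / cp one-liner."""
    kept = [l for l in hosts_text.splitlines()
            if not l.endswith("\t" + hostname)]
    kept.append(f"{ip}\t{hostname}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n172.31.99.9\thost.minikube.internal\n"
print(upsert_hosts_entry(before, "172.31.208.1", "host.minikube.internal"))
```

Because stale entries are removed first, re-running the step after an IP change (common when Hyper-V's Default Switch re-addresses) cannot leave duplicate host lines behind.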
	I0910 19:59:02.868243    8968 mustload.go:65] Loading cluster: multinode-629100
	I0910 19:59:02.869062    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:02.869656    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:59:04.719635    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:04.720073    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:04.720073    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:59:04.720761    8968 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100 for IP: 172.31.210.34
	I0910 19:59:04.720761    8968 certs.go:194] generating shared ca certs ...
	I0910 19:59:04.720761    8968 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:59:04.721293    8968 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 19:59:04.721608    8968 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 19:59:04.721754    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 19:59:04.722043    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 19:59:04.722182    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 19:59:04.722327    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 19:59:04.722855    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 19:59:04.723133    8968 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 19:59:04.723271    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 19:59:04.723555    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 19:59:04.723890    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 19:59:04.723930    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 19:59:04.724462    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 19:59:04.724770    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 19:59:04.724918    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:04.725065    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 19:59:04.725289    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:59:04.775335    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 19:59:04.818810    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:59:04.864575    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 19:59:04.911974    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 19:59:04.965545    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:59:05.007719    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 19:59:05.065810    8968 ssh_runner.go:195] Run: openssl version
	I0910 19:59:05.074328    8968 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 19:59:05.086104    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 19:59:05.116218    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 19:59:05.122357    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:59:05.122457    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:59:05.131346    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 19:59:05.138968    8968 command_runner.go:130] > 51391683
	I0910 19:59:05.149143    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 19:59:05.177207    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 19:59:05.206663    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 19:59:05.213638    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:59:05.213638    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:59:05.223372    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 19:59:05.232542    8968 command_runner.go:130] > 3ec20f2e
	I0910 19:59:05.241926    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:59:05.267436    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:59:05.301569    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:05.309084    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:05.309084    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:05.317909    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:05.325840    8968 command_runner.go:130] > b5213941
	I0910 19:59:05.336138    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
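The three `openssl x509 -hash -noout` calls above (printing 51391683, 3ec20f2e, and b5213941) compute OpenSSL's subject-name hash for each CA cert, and the `ln -fs ... /etc/ssl/certs/<hash>.0` commands create the lookup symlinks the TLS stack uses to find a CA by subject. A small sketch of the link-name convention (the hash values come from the log; computing a hash itself requires OpenSSL):

```python
def symlink_name(subject_hash, collision_index=0):
    """OpenSSL resolves CA certs as <subject-hash>.<n> under /etc/ssl/certs;
    <n> increments only when two subjects hash to the same value."""
    return f"{subject_hash}.{collision_index}"

# Hash values echoed by `openssl x509 -hash -noout` in the log above:
for h, pem in [("51391683", "4724.pem"),
               ("3ec20f2e", "47242.pem"),
               ("b5213941", "minikubeCA.pem")]:
    print(f"{symlink_name(h)} -> /etc/ssl/certs/{pem}")
```

This is the same layout `c_rehash` / `openssl rehash` maintain on a stock Linux system.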
	I0910 19:59:05.364657    8968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:59:05.371821    8968 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 19:59:05.372349    8968 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 19:59:05.372589    8968 kubeadm.go:934] updating node {m02 172.31.210.34 8443 v1.31.0 docker false true} ...
	I0910 19:59:05.372759    8968 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-629100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.210.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:59:05.381403    8968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:59:05.400696    8968 command_runner.go:130] > kubeadm
	I0910 19:59:05.400751    8968 command_runner.go:130] > kubectl
	I0910 19:59:05.400751    8968 command_runner.go:130] > kubelet
	I0910 19:59:05.400751    8968 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:59:05.411313    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0910 19:59:05.428417    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0910 19:59:05.456723    8968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:59:05.493514    8968 ssh_runner.go:195] Run: grep 172.31.215.172	control-plane.minikube.internal$ /etc/hosts
	I0910 19:59:05.499968    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.215.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:59:05.528089    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:59:05.708835    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:59:05.735452    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:59:05.736454    8968 start.go:317] joinCluster: &{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:59:05.736454    8968 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:59:05.736454    8968 host.go:66] Checking if "multinode-629100-m02" exists ...
	I0910 19:59:05.736454    8968 mustload.go:65] Loading cluster: multinode-629100
	I0910 19:59:05.737451    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:05.737451    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:59:07.666861    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:07.666861    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:07.666861    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:59:07.667489    8968 api_server.go:166] Checking apiserver status ...
	I0910 19:59:07.675811    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:59:07.675811    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:59:09.577182    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:09.577182    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:09.577182    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:11.834950    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:59:11.835415    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:11.835760    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:59:11.973428    8968 command_runner.go:130] > 1954
	I0910 19:59:11.973506    8968 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.297409s)
	I0910 19:59:11.981912    8968 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1954/cgroup
	W0910 19:59:11.998917    8968 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1954/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:59:12.010921    8968 ssh_runner.go:195] Run: ls
	I0910 19:59:12.017848    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:59:12.027154    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 200:
	ok
	I0910 19:59:12.036465    8968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain multinode-629100-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0910 19:59:12.212577    8968 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-5crht, kube-system/kube-proxy-qqrrg
	I0910 19:59:15.233281    8968 command_runner.go:130] > node/multinode-629100-m02 cordoned
	I0910 19:59:15.234104    8968 command_runner.go:130] > pod "busybox-7dff88458-7c4qt" has DeletionTimestamp older than 1 seconds, skipping
	I0910 19:59:15.234104    8968 command_runner.go:130] > node/multinode-629100-m02 drained
	I0910 19:59:15.234224    8968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain multinode-629100-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1975453s)
	I0910 19:59:15.234330    8968 node.go:128] successfully drained node "multinode-629100-m02"
	I0910 19:59:15.234510    8968 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0910 19:59:15.234652    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:59:17.078768    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:17.078768    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:17.078768    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:19.300971    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:59:19.301462    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:19.301756    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:59:19.716778    8968 command_runner.go:130] ! W0910 19:59:19.944102    1618 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0910 19:59:19.919951    8968 command_runner.go:130] ! W0910 19:59:20.147186    1618 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod 7fc605afb9d0eb24f26fc2e076eefe2380c407d03c61cca1840fcb37641ffa4f: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-7dff88458-7c4qt_default" network: cni config uninitialized
	I0910 19:59:19.937203    8968 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 19:59:19.937320    8968 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0910 19:59:19.937320    8968 command_runner.go:130] > [reset] Stopping the kubelet service
	I0910 19:59:19.937320    8968 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0910 19:59:19.937320    8968 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0910 19:59:19.937440    8968 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0910 19:59:19.937440    8968 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0910 19:59:19.937507    8968 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0910 19:59:19.937538    8968 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0910 19:59:19.937538    8968 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0910 19:59:19.937580    8968 command_runner.go:130] > to reset your system's IPVS tables.
	I0910 19:59:19.937601    8968 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0910 19:59:19.937601    8968 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0910 19:59:19.937601    8968 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.7027282s)
	I0910 19:59:19.937601    8968 node.go:155] successfully reset node "multinode-629100-m02"
	I0910 19:59:19.939296    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:59:19.939463    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:59:19.940589    8968 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 19:59:19.941110    8968 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0910 19:59:19.941170    8968 round_trippers.go:463] DELETE https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:19.941170    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:19.941170    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:19.941170    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:19.941170    8968 round_trippers.go:473]     Content-Type: application/json
	I0910 19:59:19.961966    8968 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0910 19:59:19.961966    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Content-Length: 171
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:20 GMT
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Audit-Id: 95943bc8-10fc-4acf-8db3-39c4acf08412
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:19.961966    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:19.961966    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:19.961966    8968 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-629100-m02","kind":"nodes","uid":"a82f3bc4-899c-406b-b321-16365e535c5d"}}
	I0910 19:59:19.961966    8968 node.go:180] successfully deleted node "multinode-629100-m02"
	I0910 19:59:19.961966    8968 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:59:19.962994    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 19:59:19.962994    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:59:21.791387    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:21.791899    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:21.791899    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:24.053503    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:59:24.053593    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:24.053916    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:59:24.220825    8968 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token tizv8b.w1fjagtp22n8yb3v --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
	I0910 19:59:24.220900    8968 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2576208s)
	I0910 19:59:24.220986    8968 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:59:24.220986    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tizv8b.w1fjagtp22n8yb3v --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m02"
	I0910 19:59:24.395081    8968 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:59:25.722116    8968 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 19:59:25.722317    8968 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0910 19:59:25.722317    8968 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0910 19:59:25.722317    8968 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:59:25.722317    8968 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:59:25.722317    8968 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0910 19:59:25.722425    8968 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:59:25.722425    8968 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001927576s
	I0910 19:59:25.722425    8968 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0910 19:59:25.722425    8968 command_runner.go:130] > This node has joined the cluster:
	I0910 19:59:25.722425    8968 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0910 19:59:25.722425    8968 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0910 19:59:25.722425    8968 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0910 19:59:25.722724    8968 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tizv8b.w1fjagtp22n8yb3v --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m02": (1.5015967s)
	I0910 19:59:25.722724    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 19:59:25.923187    8968 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0910 19:59:26.098950    8968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-629100-m02 minikube.k8s.io/updated_at=2024_09_10T19_59_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=multinode-629100 minikube.k8s.io/primary=false
	I0910 19:59:26.217633    8968 command_runner.go:130] > node/multinode-629100-m02 labeled
	I0910 19:59:26.221028    8968 start.go:319] duration metric: took 20.4832075s to joinCluster
	I0910 19:59:26.221225    8968 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:59:26.222699    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:26.224788    8968 out.go:177] * Verifying Kubernetes components...
	I0910 19:59:26.235193    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:59:26.444869    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:59:26.471340    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:59:26.472117    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:59:26.473026    8968 node_ready.go:35] waiting up to 6m0s for node "multinode-629100-m02" to be "Ready" ...
	I0910 19:59:26.473326    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:26.473388    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:26.473388    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:26.473388    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:26.476153    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:26.477214    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:26.477214    8968 round_trippers.go:580]     Audit-Id: b2b1f0e0-6dee-434e-a1a3-9948d4f5a701
	I0910 19:59:26.477214    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:26.477214    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:26.477214    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:26.477214    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:26.477214    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:26 GMT
	I0910 19:59:26.477381    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:26.979410    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:26.979505    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:26.979572    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:26.979572    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:26.986618    8968 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:59:26.986618    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:26.986618    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:26.986618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:26.986618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:26.986618    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:27 GMT
	I0910 19:59:26.986618    8968 round_trippers.go:580]     Audit-Id: 41b91e8b-8422-413f-8bd8-e6412595a49d
	I0910 19:59:26.986618    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:26.986618    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:27.482698    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:27.482782    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:27.482782    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:27.482782    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:27.486684    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:27.486684    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:27.486684    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:27 GMT
	I0910 19:59:27.486781    8968 round_trippers.go:580]     Audit-Id: 52689a91-309c-423c-8aa2-a9c71397e8ac
	I0910 19:59:27.486781    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:27.486781    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:27.486781    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:27.486781    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:27.486964    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:27.974263    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:27.974263    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:27.974263    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:27.974263    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:27.978048    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:27.978048    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:27.978048    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:28 GMT
	I0910 19:59:27.978048    8968 round_trippers.go:580]     Audit-Id: 50acc0b1-5e33-4bc2-8da2-8ddd51782af0
	I0910 19:59:27.978048    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:27.978048    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:27.978048    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:27.978048    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:27.978048    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:28.481599    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:28.481694    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:28.481694    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:28.481694    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:28.485258    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:28.485442    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:28.485442    8968 round_trippers.go:580]     Audit-Id: c1371e6a-e639-463c-ab14-5d1ca774b106
	I0910 19:59:28.485511    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:28.485511    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:28.485587    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:28.485635    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:28.485635    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:28 GMT
	I0910 19:59:28.485836    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:28.485864    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:28.973934    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:28.973996    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:28.973996    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:28.973996    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:28.977327    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:28.977327    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:28.977327    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:28.977327    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:29 GMT
	I0910 19:59:28.977327    8968 round_trippers.go:580]     Audit-Id: 5895e6b3-2911-4a19-881f-8f398429fbb8
	I0910 19:59:28.977905    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:28.977905    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:28.977905    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:28.978046    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:29.475701    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:29.475853    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:29.475853    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:29.475853    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:29.478701    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:29.478701    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:29.479583    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:29.479583    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:29 GMT
	I0910 19:59:29.479583    8968 round_trippers.go:580]     Audit-Id: 7d9eeda5-40a2-4b9c-92b6-bfd3c992fe51
	I0910 19:59:29.479583    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:29.479583    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:29.479583    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:29.479583    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:29.976235    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:29.976309    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:29.976309    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:29.976309    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:29.981395    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:29.981395    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:29.981934    8968 round_trippers.go:580]     Audit-Id: 41fadc76-e7e1-4428-9763-3e45ff103d89
	I0910 19:59:29.981934    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:29.981934    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:29.981934    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:29.981934    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:29.981934    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:30 GMT
	I0910 19:59:29.982111    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:30.477104    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:30.477104    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:30.477179    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:30.477179    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:30.480832    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:30.480995    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:30.480995    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:30.480995    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:30.480995    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:30.480995    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:30.480995    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:30 GMT
	I0910 19:59:30.480995    8968 round_trippers.go:580]     Audit-Id: 1a29d338-467a-410f-b324-cf037f217777
	I0910 19:59:30.481171    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:30.976941    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:30.976941    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:30.976941    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:30.976941    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:30.981847    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:30.982266    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:30.982266    8968 round_trippers.go:580]     Audit-Id: 253ef817-3848-4eb2-b1a8-2d1326140059
	I0910 19:59:30.982266    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:30.982266    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:30.982266    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:30.982266    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:30.982266    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:31 GMT
	I0910 19:59:30.982389    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:30.982779    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:31.477831    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:31.477898    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:31.477898    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:31.477898    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:31.481068    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:31.481602    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:31.481602    8968 round_trippers.go:580]     Audit-Id: 4c59cdbf-2917-49f1-a32e-f38970176264
	I0910 19:59:31.481602    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:31.481602    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:31.481602    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:31.481602    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:31.481602    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:31 GMT
	I0910 19:59:31.481883    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:31.979081    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:31.979081    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:31.979081    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:31.979081    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:31.981876    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:31.981876    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:31.981876    8968 round_trippers.go:580]     Audit-Id: a2d8df23-b4bc-4050-b1dd-4915e936b8da
	I0910 19:59:31.981876    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:31.981876    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:31.982618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:31.982618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:31.982618    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:32 GMT
	I0910 19:59:31.982618    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:32.479369    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:32.479369    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:32.479369    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:32.479369    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:32.482975    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:32.482975    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:32.482975    8968 round_trippers.go:580]     Audit-Id: b8a16908-3904-460d-9d09-a7773f73ce65
	I0910 19:59:32.482975    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:32.482975    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:32.482975    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:32.482975    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:32.483257    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:32 GMT
	I0910 19:59:32.483340    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:32.977410    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:32.977410    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:32.977715    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:32.977715    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:32.982966    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:32.982966    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:32.982966    8968 round_trippers.go:580]     Audit-Id: 04d0aa2c-5a6e-4665-a588-6b330f083489
	I0910 19:59:32.982966    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:32.982966    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:32.982966    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:32.982966    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:32.982966    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:33 GMT
	I0910 19:59:32.983517    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:32.983917    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:33.478678    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:33.478678    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:33.478678    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:33.478678    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:33.484099    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:33.484099    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:33.484099    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:33 GMT
	I0910 19:59:33.484099    8968 round_trippers.go:580]     Audit-Id: c1983ed6-54d4-45b7-848a-2712bcecd5cf
	I0910 19:59:33.484099    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:33.484099    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:33.484099    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:33.484245    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:33.484415    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:33.978665    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:33.978727    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:33.978727    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:33.978727    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:33.982189    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:33.982189    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:33.982189    8968 round_trippers.go:580]     Audit-Id: fa250dff-3b8c-4dac-9608-70e1ed500341
	I0910 19:59:33.982189    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:33.982189    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:33.982189    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:33.982189    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:33.982189    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:34 GMT
	I0910 19:59:33.982630    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:34.480076    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:34.480286    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:34.480286    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:34.480286    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:34.483783    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:34.483783    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:34.483783    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:34.483783    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:34.483783    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:34 GMT
	I0910 19:59:34.483783    8968 round_trippers.go:580]     Audit-Id: 1c56e164-0ea4-443f-b5c4-a14044386ce3
	I0910 19:59:34.483783    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:34.483783    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:34.484218    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:34.981099    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:34.981183    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:34.981264    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:34.981264    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:34.984941    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:34.985016    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:34.985016    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:35 GMT
	I0910 19:59:34.985016    8968 round_trippers.go:580]     Audit-Id: 46bb7b05-8d5b-4459-983d-41a4dade3324
	I0910 19:59:34.985016    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:34.985082    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:34.985082    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:34.985082    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:34.985351    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:34.985997    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:35.484822    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:35.484822    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:35.484822    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:35.484822    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:35.487724    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:35.488656    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:35.488656    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:35.488656    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:35 GMT
	I0910 19:59:35.488656    8968 round_trippers.go:580]     Audit-Id: 37aec342-1718-4374-a0d0-543d2d25d7d5
	I0910 19:59:35.488656    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:35.488656    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:35.488656    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:35.488656    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:35.988545    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:35.988545    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:35.988618    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:35.988618    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:35.991291    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:35.991800    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:35.991800    8968 round_trippers.go:580]     Audit-Id: 93060038-eacf-4b30-a595-c00ae1ffe533
	I0910 19:59:35.991859    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:35.991859    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:35.991859    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:35.991859    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:35.991859    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:36 GMT
	I0910 19:59:35.991926    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:36.489235    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:36.489235    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:36.489235    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:36.489235    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:36.492950    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:36.493345    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:36.493345    8968 round_trippers.go:580]     Audit-Id: c23a9b68-9ed6-471a-9602-0f5d414c330c
	I0910 19:59:36.493345    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:36.493345    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:36.493345    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:36.493345    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:36.493345    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:36 GMT
	I0910 19:59:36.493345    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:36.988810    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:36.988923    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:36.988923    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:36.988923    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:36.993977    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:36.993977    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:36.994071    8968 round_trippers.go:580]     Audit-Id: 40708c38-0382-4cce-8ff8-5be3919c531f
	I0910 19:59:36.994071    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:36.994071    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:36.994071    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:36.994071    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:36.994071    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:37 GMT
	I0910 19:59:36.994357    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:36.995014    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:37.490334    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:37.490334    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:37.490410    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:37.490410    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:37.494556    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:37.495301    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:37.495301    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:37.495301    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:37 GMT
	I0910 19:59:37.495301    8968 round_trippers.go:580]     Audit-Id: 9824b441-32fb-492a-a108-a02f9b577435
	I0910 19:59:37.495301    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:37.495301    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:37.495429    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:37.495619    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:37.976510    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:37.976589    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:37.976589    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:37.976666    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:37.982543    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:37.982543    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:37.982543    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:37.982543    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:37.982543    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:38 GMT
	I0910 19:59:37.982543    8968 round_trippers.go:580]     Audit-Id: f2233a22-321a-4a6f-9e07-193ab46cd78d
	I0910 19:59:37.982543    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:37.982543    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:37.983242    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:38.476132    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:38.476132    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:38.476249    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:38.476249    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:38.480770    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:38.481029    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:38.481029    8968 round_trippers.go:580]     Audit-Id: 362f4f44-720b-4a3f-b67b-58f44eb3d5b2
	I0910 19:59:38.481029    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:38.481029    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:38.481029    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:38.481029    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:38.481029    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:38 GMT
	I0910 19:59:38.481298    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:38.978179    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:38.978179    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:38.978258    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:38.978258    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:38.984809    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:59:38.984809    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:38.984809    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:38.984809    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:38.984809    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:38.984809    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:38.984809    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:39 GMT
	I0910 19:59:38.984809    8968 round_trippers.go:580]     Audit-Id: 5c9bef8b-eaee-4f1f-8e0f-469840c7699d
	I0910 19:59:38.984809    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:39.476667    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:39.476740    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:39.476740    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:39.476740    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:39.480411    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:39.480761    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:39.480761    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:39.480761    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:39.480761    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:39.480852    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:39.480852    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:39 GMT
	I0910 19:59:39.480852    8968 round_trippers.go:580]     Audit-Id: 204e5a9e-ed7d-433c-a657-47c4421aad1f
	I0910 19:59:39.481168    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:39.481925    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:39.977492    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:39.977492    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:39.977492    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:39.977492    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:39.982755    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:39.982755    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:39.982755    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:39.982755    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:39.982755    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:40 GMT
	I0910 19:59:39.982755    8968 round_trippers.go:580]     Audit-Id: 89774f9d-a748-4735-8621-26ff0e497d18
	I0910 19:59:39.982755    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:39.982755    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:39.983056    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:40.475411    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:40.475411    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:40.475411    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:40.475411    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:40.478982    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:40.478982    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:40.478982    8968 round_trippers.go:580]     Audit-Id: c55e79f5-bb7c-46fe-a7aa-d1911499412b
	I0910 19:59:40.478982    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:40.479487    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:40.479487    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:40.479487    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:40.479487    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:40 GMT
	I0910 19:59:40.479753    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:40.988926    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:40.988926    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:40.989000    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:40.989000    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:40.992680    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:40.992680    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:40.992680    8968 round_trippers.go:580]     Audit-Id: a8eb34e3-12c2-4c37-9ca3-f24a50cd7ac7
	I0910 19:59:40.992680    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:40.992788    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:40.992788    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:40.992788    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:40.992788    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:41 GMT
	I0910 19:59:40.992980    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:41.487748    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:41.487818    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:41.487886    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:41.487886    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:41.492278    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:41.492311    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:41.492311    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:41.492311    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:41.492311    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:41.492311    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:41.492311    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:41 GMT
	I0910 19:59:41.492311    8968 round_trippers.go:580]     Audit-Id: 5e01bc85-f1a1-466c-90d4-a4363185bf95
	I0910 19:59:41.492311    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:41.493152    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:41.989732    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:41.989732    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:41.989732    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:41.989732    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:41.993382    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:41.993382    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:41.993382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:41.993382    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:42 GMT
	I0910 19:59:41.993382    8968 round_trippers.go:580]     Audit-Id: b0832d61-c72f-4604-9d9e-3d263876acf3
	I0910 19:59:41.993382    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:41.993382    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:41.993382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:41.994436    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:42.477425    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:42.477511    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:42.477511    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:42.477511    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:42.481375    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:42.481483    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:42.481560    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:42.481560    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:42.481560    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:42.481560    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:42.481560    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:42 GMT
	I0910 19:59:42.481704    8968 round_trippers.go:580]     Audit-Id: 01b7da05-56f8-4e9b-bf0a-3155f1c8b118
	I0910 19:59:42.481865    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:42.983095    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:42.983328    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:42.983328    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:42.983409    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:42.989332    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:42.989332    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:42.989332    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:42.989332    8968 round_trippers.go:580]     Audit-Id: 32a887cf-d6d4-4ba6-8075-c4e5fb15cfbb
	I0910 19:59:42.989332    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:42.989332    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:42.989332    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:42.989332    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:42.989332    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1990","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3926 chars]
	I0910 19:59:42.990398    8968 node_ready.go:49] node "multinode-629100-m02" has status "Ready":"True"
	I0910 19:59:42.990491    8968 node_ready.go:38] duration metric: took 16.5161989s for node "multinode-629100-m02" to be "Ready" ...
	I0910 19:59:42.990491    8968 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:59:42.990617    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:59:42.990667    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:42.990695    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:42.990695    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:42.994112    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:42.994112    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:42.994112    8968 round_trippers.go:580]     Audit-Id: 5ae8eeaa-277c-461e-b272-0261c43edfa3
	I0910 19:59:42.994112    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:42.994112    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:42.994112    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:42.994112    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:42.994112    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:42.995111    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1992"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89569 chars]
	I0910 19:59:42.999566    8968 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:42.999566    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:59:42.999566    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:42.999566    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:42.999566    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.002358    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:43.002358    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.002358    8968 round_trippers.go:580]     Audit-Id: d909d94a-4b5d-417c-a1d2-79b95ff612f6
	I0910 19:59:43.002358    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.002358    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.002358    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.002358    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.002358    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.002992    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0910 19:59:43.003660    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:43.003719    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.003719    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.003719    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.005677    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:59:43.005677    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.005677    8968 round_trippers.go:580]     Audit-Id: 3a82626a-8872-40f5-9e53-38f4605b11f2
	I0910 19:59:43.005677    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.005677    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.005677    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.005677    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.005677    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.006215    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:43.006610    8968 pod_ready.go:93] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.006675    8968 pod_ready.go:82] duration metric: took 7.108ms for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.006675    8968 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.006800    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 19:59:43.006800    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.006800    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.006800    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.009146    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:43.009146    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.009146    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.009618    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.009618    8968 round_trippers.go:580]     Audit-Id: 4be1e62c-ded0-48a7-a637-ff0c9a6d7897
	I0910 19:59:43.009618    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.009618    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.009618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.009710    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1766","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I0910 19:59:43.010257    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:43.010257    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.010257    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.010257    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.013034    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:43.013359    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.013359    8968 round_trippers.go:580]     Audit-Id: ba612ca7-9f21-4535-9350-a8557ffb79c6
	I0910 19:59:43.013359    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.013410    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.013410    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.013410    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.013410    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.013588    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:43.014116    8968 pod_ready.go:93] pod "etcd-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.014150    8968 pod_ready.go:82] duration metric: took 7.475ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.014150    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.014296    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 19:59:43.014296    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.014296    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.014334    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.018506    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:43.018506    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.018506    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.018506    8968 round_trippers.go:580]     Audit-Id: b4dff58b-0826-4660-90ce-50af642158e9
	I0910 19:59:43.018506    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.018506    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.018506    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.018506    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.019079    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5","resourceVersion":"1763","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.215.172:8443","kubernetes.io/config.hash":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.mirror":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.seen":"2024-09-10T19:56:41.591483300Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I0910 19:59:43.019161    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:43.019161    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.019161    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.019161    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.021735    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:43.021735    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.021735    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.021735    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.021735    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.021735    8968 round_trippers.go:580]     Audit-Id: 5fcceb6e-c937-4989-afab-265cf1706208
	I0910 19:59:43.021735    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.021735    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.021735    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:43.021735    8968 pod_ready.go:93] pod "kube-apiserver-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.022721    8968 pod_ready.go:82] duration metric: took 8.5319ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.022721    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.022721    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 19:59:43.022721    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.022721    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.022721    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.024586    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:59:43.024586    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.024586    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.024586    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.024586    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.024586    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.024586    8968 round_trippers.go:580]     Audit-Id: 1cf28b16-5eac-4f1e-b395-ab20eb169c43
	I0910 19:59:43.024586    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.025583    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"1770","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0910 19:59:43.025583    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:43.025583    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.025583    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.025583    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.027388    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:59:43.027388    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.027388    8968 round_trippers.go:580]     Audit-Id: 44f7ab70-07ed-4b84-ab82-ce9658869fe0
	I0910 19:59:43.027388    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.027388    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.027388    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.027388    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.027388    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.028452    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:43.028789    8968 pod_ready.go:93] pod "kube-controller-manager-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.028860    8968 pod_ready.go:82] duration metric: took 6.1386ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.028860    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.186310    8968 request.go:632] Waited for 157.3754ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:59:43.186480    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:59:43.186597    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.186597    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.186597    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.192186    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:43.192186    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.192186    8968 round_trippers.go:580]     Audit-Id: 80b61aaa-08ed-4160-89a3-1a2b09b50d6f
	I0910 19:59:43.192186    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.192186    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.192186    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.192186    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.192186    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.192832    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4tzx6","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bb18c28-3ee9-4028-a61d-3d7f6ea31894","resourceVersion":"1613","creationTimestamp":"2024-09-10T19:42:55Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0910 19:59:43.390761    8968 request.go:632] Waited for 196.9981ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:59:43.390905    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:59:43.390905    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.390905    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.390905    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.397776    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:59:43.397794    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.397794    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.397794    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.397794    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.397794    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.397794    8968 round_trippers.go:580]     Audit-Id: 4cbe491b-4944-411f-817e-9395908ded40
	I0910 19:59:43.397794    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.397794    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"39f4665c-e276-4967-a647-46b3a9a1c67a","resourceVersion":"1764","creationTimestamp":"2024-09-10T19:52:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_52_30_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4394 chars]
	I0910 19:59:43.398745    8968 pod_ready.go:98] node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:59:43.398745    8968 pod_ready.go:82] duration metric: took 369.8604ms for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	E0910 19:59:43.398745    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:59:43.398816    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.593101    8968 request.go:632] Waited for 194.2722ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:59:43.593420    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:59:43.593420    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.593420    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.593508    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.597679    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:43.597679    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.597770    8968 round_trippers.go:580]     Audit-Id: a5d23962-b82a-48ae-b5cb-6940e1b3e384
	I0910 19:59:43.597770    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.597770    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.597770    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.597770    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.597770    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.598242    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qqrrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fc7fdda-d5e4-4c72-96c1-2348eb72b491","resourceVersion":"1960","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6203 chars]
	I0910 19:59:43.796728    8968 request.go:632] Waited for 197.0145ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:43.796799    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:43.796799    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.796799    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.796799    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.804054    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:59:43.804054    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.804054    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.804054    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.804054    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.804054    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:43.804054    8968 round_trippers.go:580]     Audit-Id: e50fdf33-e254-48e0-89c7-c444dd7117c1
	I0910 19:59:43.804054    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.804054    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1990","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3926 chars]
	I0910 19:59:43.804054    8968 pod_ready.go:93] pod "kube-proxy-qqrrg" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.804054    8968 pod_ready.go:82] duration metric: took 405.211ms for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.805080    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.983618    8968 request.go:632] Waited for 178.1884ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:59:43.983618    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:59:43.983823    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.983823    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.983922    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.988306    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:43.988306    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.988502    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.988502    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.988502    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.988502    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.988502    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:43.988502    8968 round_trippers.go:580]     Audit-Id: 354a06b8-0595-4a89-a10c-d84a5fe4ac4b
	I0910 19:59:43.988677    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"1662","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0910 19:59:44.186776    8968 request.go:632] Waited for 197.2184ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:44.186884    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:44.186991    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:44.186991    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:44.186991    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:44.190426    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:44.191437    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:44.191437    8968 round_trippers.go:580]     Audit-Id: 0ec4ace3-0a37-4639-91ea-0624ff254aa1
	I0910 19:59:44.191501    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:44.191501    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:44.191501    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:44.191501    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:44.191501    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:44.191697    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:44.192092    8968 pod_ready.go:93] pod "kube-proxy-wqf2d" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:44.192239    8968 pod_ready.go:82] duration metric: took 387.1334ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:44.192239    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:44.388952    8968 request.go:632] Waited for 196.625ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:59:44.389119    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:59:44.389119    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:44.389119    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:44.389119    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:44.395392    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:59:44.395554    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:44.395554    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:44.395554    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:44.395554    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:44.395554    8968 round_trippers.go:580]     Audit-Id: a187e243-ade8-4eba-a622-4b87d1fae44d
	I0910 19:59:44.395554    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:44.395554    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:44.395554    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"1757","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0910 19:59:44.593751    8968 request.go:632] Waited for 197.3713ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:44.594209    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:44.594209    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:44.594209    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:44.594448    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:44.598375    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:44.598544    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:44.598544    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:44.598544    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:44.598544    8968 round_trippers.go:580]     Audit-Id: 19b06a75-46d9-4534-ab3a-9e47736a3a73
	I0910 19:59:44.598544    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:44.598544    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:44.598544    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:44.598544    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:44.599304    8968 pod_ready.go:93] pod "kube-scheduler-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:44.599304    8968 pod_ready.go:82] duration metric: took 407.0378ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:44.599369    8968 pod_ready.go:39] duration metric: took 1.6087696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:59:44.599369    8968 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:59:44.608001    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:59:44.630296    8968 system_svc.go:56] duration metric: took 30.925ms WaitForService to wait for kubelet
	I0910 19:59:44.630296    8968 kubeadm.go:582] duration metric: took 18.4078396s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:59:44.630296    8968 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:59:44.797493    8968 request.go:632] Waited for 166.9512ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes
	I0910 19:59:44.797591    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes
	I0910 19:59:44.797591    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:44.797681    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:44.797779    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:44.801072    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:44.801072    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:44.801209    8968 round_trippers.go:580]     Audit-Id: b452cef8-1f82-4df9-8db5-e532bbc39302
	I0910 19:59:44.801209    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:44.801209    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:44.801209    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:44.801209    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:44.801209    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:45 GMT
	I0910 19:59:44.801754    8968 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1994"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15589 chars]
	I0910 19:59:44.803344    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:59:44.803427    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:59:44.803499    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:59:44.803499    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:59:44.803499    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:59:44.803499    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:59:44.803499    8968 node_conditions.go:105] duration metric: took 173.1915ms to run NodePressure ...
	I0910 19:59:44.803499    8968 start.go:241] waiting for startup goroutines ...
	I0910 19:59:44.803499    8968 start.go:255] writing updated cluster config ...
	I0910 19:59:44.807882    8968 out.go:201] 
	I0910 19:59:44.810694    8968 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:44.823639    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:44.823639    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:59:44.828937    8968 out.go:177] * Starting "multinode-629100-m03" worker node in "multinode-629100" cluster
	I0910 19:59:44.833677    8968 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:59:44.833677    8968 cache.go:56] Caching tarball of preloaded images
	I0910 19:59:44.834202    8968 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 19:59:44.834202    8968 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 19:59:44.834202    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:59:44.839736    8968 start.go:360] acquireMachinesLock for multinode-629100-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:59:44.839921    8968 start.go:364] duration metric: took 184.7µs to acquireMachinesLock for "multinode-629100-m03"
	I0910 19:59:44.839962    8968 start.go:96] Skipping create...Using existing machine configuration
	I0910 19:59:44.839962    8968 fix.go:54] fixHost starting: m03
	I0910 19:59:44.840595    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 19:59:46.691130    8968 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 19:59:46.691130    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:46.691506    8968 fix.go:112] recreateIfNeeded on multinode-629100-m03: state=Stopped err=<nil>
	W0910 19:59:46.691506    8968 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 19:59:46.693542    8968 out.go:177] * Restarting existing hyperv VM for "multinode-629100-m03" ...
	I0910 19:59:46.697859    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-629100-m03
	I0910 19:59:49.491873    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:59:49.492355    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:49.492355    8968 main.go:141] libmachine: Waiting for host to start...
	I0910 19:59:49.492355    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 19:59:51.472162    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:51.472421    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:51.472421    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:53.718573    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:59:53.718573    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:54.726797    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 19:59:56.621977    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:56.621977    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:56.621977    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:58.797657    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:59:58.798295    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:59.807897    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:01.780054    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:01.780054    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:01.780054    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:03.981888    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 20:00:03.981986    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:04.988356    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:06.950117    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:06.950117    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:06.950942    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:09.154880    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 20:00:09.154880    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:10.158703    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:12.145189    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:12.146199    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:12.146268    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:14.516827    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:14.516827    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:14.520303    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:16.425337    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:16.425337    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:16.426260    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:18.698335    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:18.698335    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:18.698831    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 20:00:18.700918    8968 machine.go:93] provisionDockerMachine start ...
	I0910 20:00:18.701028    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:20.600667    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:20.600667    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:20.600771    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:22.848735    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:22.849124    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:22.853193    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:22.853290    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:22.853823    8968 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 20:00:22.998775    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 20:00:22.998775    8968 buildroot.go:166] provisioning hostname "multinode-629100-m03"
	I0910 20:00:22.998775    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:24.917705    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:24.917705    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:24.918774    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:27.190971    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:27.191015    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:27.195681    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:27.196511    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:27.196582    8968 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629100-m03 && echo "multinode-629100-m03" | sudo tee /etc/hostname
	I0910 20:00:27.366054    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629100-m03
	
	I0910 20:00:27.366114    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:29.303820    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:29.303820    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:29.303915    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:31.621807    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:31.622852    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:31.626757    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:31.627123    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:31.627231    8968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 20:00:31.773558    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 20:00:31.773558    8968 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 20:00:31.773558    8968 buildroot.go:174] setting up certificates
	I0910 20:00:31.773558    8968 provision.go:84] configureAuth start
	I0910 20:00:31.773558    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:33.622367    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:33.623009    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:33.623009    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:35.848539    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:35.848539    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:35.848834    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:37.736339    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:37.736339    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:37.736407    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:40.021628    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:40.022414    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:40.022414    8968 provision.go:143] copyHostCerts
	I0910 20:00:40.022584    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 20:00:40.023096    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 20:00:40.023096    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 20:00:40.023626    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 20:00:40.024911    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 20:00:40.025224    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 20:00:40.025224    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 20:00:40.025616    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 20:00:40.026931    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 20:00:40.027251    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 20:00:40.027251    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 20:00:40.027580    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 20:00:40.028294    8968 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-629100-m03 san=[127.0.0.1 172.31.214.220 localhost minikube multinode-629100-m03]
	I0910 20:00:40.142265    8968 provision.go:177] copyRemoteCerts
	I0910 20:00:40.150325    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 20:00:40.150325    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:41.989235    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:41.989235    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:41.989326    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:44.196399    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:44.196399    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:44.196940    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:00:44.298062    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1474571s)
	I0910 20:00:44.298153    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 20:00:44.298512    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0910 20:00:44.348354    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 20:00:44.348615    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 20:00:44.391852    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 20:00:44.392218    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 20:00:44.440352    8968 provision.go:87] duration metric: took 12.6659406s to configureAuth
	I0910 20:00:44.440433    8968 buildroot.go:189] setting minikube options for container-runtime
	I0910 20:00:44.441419    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 20:00:44.441545    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:46.315267    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:46.315267    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:46.316319    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:48.571380    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:48.572268    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:48.575707    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:48.576294    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:48.576294    8968 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 20:00:48.702765    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 20:00:48.702765    8968 buildroot.go:70] root file system type: tmpfs
	I0910 20:00:48.702765    8968 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 20:00:48.703549    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:50.532870    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:50.532870    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:50.532870    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:52.747996    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:52.748706    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:52.752549    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:52.752549    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:52.753106    8968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.215.172"
	Environment="NO_PROXY=172.31.215.172,172.31.210.34"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 20:00:52.908909    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.215.172
	Environment=NO_PROXY=172.31.215.172,172.31.210.34
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 20:00:52.908963    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:54.763022    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:54.763022    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:54.763331    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:56.979083    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:56.980033    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:56.983942    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:56.984101    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:56.984101    8968 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 20:00:59.242727    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 20:00:59.243370    8968 machine.go:96] duration metric: took 40.5397199s to provisionDockerMachine
	I0910 20:00:59.243370    8968 start.go:293] postStartSetup for "multinode-629100-m03" (driver="hyperv")
	I0910 20:00:59.243370    8968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 20:00:59.254063    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 20:00:59.254063    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:01.100168    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:01.100168    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:01.100544    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:03.425307    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:03.425307    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:03.426112    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:01:03.538250    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2838973s)
	I0910 20:01:03.551308    8968 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 20:01:03.557944    8968 command_runner.go:130] > NAME=Buildroot
	I0910 20:01:03.557944    8968 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 20:01:03.557944    8968 command_runner.go:130] > ID=buildroot
	I0910 20:01:03.557944    8968 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 20:01:03.557944    8968 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 20:01:03.557944    8968 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 20:01:03.557944    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 20:01:03.558471    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 20:01:03.559001    8968 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 20:01:03.559119    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 20:01:03.567667    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 20:01:03.586833    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 20:01:03.632959    8968 start.go:296] duration metric: took 4.3892924s for postStartSetup
	I0910 20:01:03.632959    8968 fix.go:56] duration metric: took 1m18.787698s for fixHost
	I0910 20:01:03.632959    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:05.540885    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:05.541289    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:05.541289    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:07.834299    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:07.834299    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:07.838641    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:01:07.838641    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:01:07.838641    8968 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 20:01:07.983174    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725998468.203656123
	
	I0910 20:01:07.983265    8968 fix.go:216] guest clock: 1725998468.203656123
	I0910 20:01:07.983265    8968 fix.go:229] Guest: 2024-09-10 20:01:08.203656123 +0000 UTC Remote: 2024-09-10 20:01:03.6329591 +0000 UTC m=+372.332579001 (delta=4.570697023s)
	I0910 20:01:07.983415    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:09.891034    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:09.891034    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:09.891893    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:12.207946    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:12.207946    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:12.211930    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:01:12.212602    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:01:12.212602    8968 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725998467
	I0910 20:01:12.363123    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 20:01:07 UTC 2024
	
	I0910 20:01:12.363123    8968 fix.go:236] clock set: Tue Sep 10 20:01:07 UTC 2024
	 (err=<nil>)
	I0910 20:01:12.363123    8968 start.go:83] releasing machines lock for "multinode-629100-m03", held for 1m27.5172728s
	I0910 20:01:12.363123    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:14.268870    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:14.269588    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:14.269588    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:16.565147    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:16.565147    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:16.567842    8968 out.go:177] * Found network options:
	I0910 20:01:16.570467    8968 out.go:177]   - NO_PROXY=172.31.215.172,172.31.210.34
	W0910 20:01:16.572933    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 20:01:16.572933    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 20:01:16.575423    8968 out.go:177]   - NO_PROXY=172.31.215.172,172.31.210.34
	W0910 20:01:16.577799    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 20:01:16.577847    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 20:01:16.578889    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 20:01:16.578966    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 20:01:16.581113    8968 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 20:01:16.581167    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:16.588473    8968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 20:01:16.588473    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:18.537486    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:18.537650    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:18.537650    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:18.560126    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:18.560126    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:18.560488    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:20.893827    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:20.893827    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:20.893827    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:01:20.947432    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:20.947514    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:20.947835    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:01:20.988217    8968 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0910 20:01:20.989433    8968 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4079643s)
	W0910 20:01:20.989483    8968 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 20:01:21.039634    8968 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0910 20:01:21.040389    8968 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4516151s)
	W0910 20:01:21.040389    8968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 20:01:21.050898    8968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 20:01:21.080176    8968 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0910 20:01:21.080251    8968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 20:01:21.080251    8968 start.go:495] detecting cgroup driver to use...
	I0910 20:01:21.080433    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0910 20:01:21.112127    8968 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 20:01:21.112195    8968 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 20:01:21.120440    8968 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0910 20:01:21.128244    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 20:01:21.160340    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 20:01:21.179447    8968 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 20:01:21.190340    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 20:01:21.220345    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 20:01:21.251859    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 20:01:21.279243    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 20:01:21.308497    8968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 20:01:21.348781    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 20:01:21.379030    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 20:01:21.414108    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 20:01:21.444908    8968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 20:01:21.463534    8968 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 20:01:21.474116    8968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 20:01:21.504191    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:21.710944    8968 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 20:01:21.741156    8968 start.go:495] detecting cgroup driver to use...
	I0910 20:01:21.749419    8968 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 20:01:21.772677    8968 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0910 20:01:21.772677    8968 command_runner.go:130] > [Unit]
	I0910 20:01:21.772677    8968 command_runner.go:130] > Description=Docker Application Container Engine
	I0910 20:01:21.772677    8968 command_runner.go:130] > Documentation=https://docs.docker.com
	I0910 20:01:21.773219    8968 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0910 20:01:21.773219    8968 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0910 20:01:21.773219    8968 command_runner.go:130] > StartLimitBurst=3
	I0910 20:01:21.773219    8968 command_runner.go:130] > StartLimitIntervalSec=60
	I0910 20:01:21.773219    8968 command_runner.go:130] > [Service]
	I0910 20:01:21.773308    8968 command_runner.go:130] > Type=notify
	I0910 20:01:21.773339    8968 command_runner.go:130] > Restart=on-failure
	I0910 20:01:21.773339    8968 command_runner.go:130] > Environment=NO_PROXY=172.31.215.172
	I0910 20:01:21.773339    8968 command_runner.go:130] > Environment=NO_PROXY=172.31.215.172,172.31.210.34
	I0910 20:01:21.773339    8968 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0910 20:01:21.773429    8968 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0910 20:01:21.773429    8968 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0910 20:01:21.773429    8968 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0910 20:01:21.773429    8968 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0910 20:01:21.773517    8968 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0910 20:01:21.773578    8968 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0910 20:01:21.773578    8968 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0910 20:01:21.773578    8968 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0910 20:01:21.773578    8968 command_runner.go:130] > ExecStart=
	I0910 20:01:21.773578    8968 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0910 20:01:21.773696    8968 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0910 20:01:21.773696    8968 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0910 20:01:21.773696    8968 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0910 20:01:21.773737    8968 command_runner.go:130] > LimitNOFILE=infinity
	I0910 20:01:21.773737    8968 command_runner.go:130] > LimitNPROC=infinity
	I0910 20:01:21.773737    8968 command_runner.go:130] > LimitCORE=infinity
	I0910 20:01:21.773737    8968 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0910 20:01:21.773737    8968 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0910 20:01:21.773737    8968 command_runner.go:130] > TasksMax=infinity
	I0910 20:01:21.773737    8968 command_runner.go:130] > TimeoutStartSec=0
	I0910 20:01:21.773835    8968 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0910 20:01:21.773835    8968 command_runner.go:130] > Delegate=yes
	I0910 20:01:21.773835    8968 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0910 20:01:21.773835    8968 command_runner.go:130] > KillMode=process
	I0910 20:01:21.773915    8968 command_runner.go:130] > [Install]
	I0910 20:01:21.773915    8968 command_runner.go:130] > WantedBy=multi-user.target
	I0910 20:01:21.783017    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 20:01:21.814834    8968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 20:01:21.861533    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 20:01:21.895115    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 20:01:21.929737    8968 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 20:01:21.986989    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 20:01:22.009786    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 20:01:22.044338    8968 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0910 20:01:22.052447    8968 ssh_runner.go:195] Run: which cri-dockerd
	I0910 20:01:22.060048    8968 command_runner.go:130] > /usr/bin/cri-dockerd
	I0910 20:01:22.068564    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 20:01:22.087692    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 20:01:22.130748    8968 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 20:01:22.326041    8968 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 20:01:22.510450    8968 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 20:01:22.510682    8968 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 20:01:22.552534    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:22.733732    8968 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 20:01:25.383537    8968 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.649626s)
	I0910 20:01:25.394801    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 20:01:25.428389    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 20:01:25.461351    8968 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 20:01:25.658622    8968 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 20:01:25.853479    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:26.039464    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 20:01:26.080353    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 20:01:26.111225    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:26.301028    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 20:01:26.405692    8968 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 20:01:26.416235    8968 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 20:01:26.425182    8968 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0910 20:01:26.425258    8968 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 20:01:26.425258    8968 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0910 20:01:26.425258    8968 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0910 20:01:26.425258    8968 command_runner.go:130] > Access: 2024-09-10 20:01:26.551452255 +0000
	I0910 20:01:26.425306    8968 command_runner.go:130] > Modify: 2024-09-10 20:01:26.551452255 +0000
	I0910 20:01:26.425306    8968 command_runner.go:130] > Change: 2024-09-10 20:01:26.555452503 +0000
	I0910 20:01:26.425306    8968 command_runner.go:130] >  Birth: -
	I0910 20:01:26.425373    8968 start.go:563] Will wait 60s for crictl version
	I0910 20:01:26.434411    8968 ssh_runner.go:195] Run: which crictl
	I0910 20:01:26.440547    8968 command_runner.go:130] > /usr/bin/crictl
	I0910 20:01:26.448761    8968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 20:01:26.506672    8968 command_runner.go:130] > Version:  0.1.0
	I0910 20:01:26.506672    8968 command_runner.go:130] > RuntimeName:  docker
	I0910 20:01:26.506672    8968 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0910 20:01:26.506672    8968 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 20:01:26.506672    8968 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 20:01:26.514848    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 20:01:26.542515    8968 command_runner.go:130] > 27.2.0
	I0910 20:01:26.554585    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 20:01:26.583005    8968 command_runner.go:130] > 27.2.0
	I0910 20:01:26.588672    8968 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 20:01:26.590691    8968 out.go:177]   - env NO_PROXY=172.31.215.172
	I0910 20:01:26.592929    8968 out.go:177]   - env NO_PROXY=172.31.215.172,172.31.210.34
	I0910 20:01:26.595533    8968 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 20:01:26.599058    8968 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 20:01:26.599058    8968 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 20:01:26.599058    8968 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 20:01:26.599058    8968 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 20:01:26.601710    8968 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 20:01:26.601710    8968 ip.go:214] interface addr: 172.31.208.1/20
	I0910 20:01:26.611962    8968 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 20:01:26.618162    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 20:01:26.638790    8968 mustload.go:65] Loading cluster: multinode-629100
	I0910 20:01:26.639513    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 20:01:26.639951    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 20:01:28.542619    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:28.542619    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:28.542619    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 20:01:28.544104    8968 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100 for IP: 172.31.214.220
	I0910 20:01:28.544104    8968 certs.go:194] generating shared ca certs ...
	I0910 20:01:28.544189    8968 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 20:01:28.544684    8968 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 20:01:28.544962    8968 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 20:01:28.545125    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 20:01:28.545366    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 20:01:28.545525    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 20:01:28.545664    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 20:01:28.546109    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 20:01:28.546373    8968 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 20:01:28.546556    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 20:01:28.546736    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 20:01:28.546958    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 20:01:28.546958    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 20:01:28.547594    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 20:01:28.547874    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 20:01:28.548018    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.548147    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 20:01:28.548374    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 20:01:28.595999    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 20:01:28.641867    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 20:01:28.692971    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 20:01:28.747797    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 20:01:28.791833    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 20:01:28.837087    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 20:01:28.903078    8968 ssh_runner.go:195] Run: openssl version
	I0910 20:01:28.911038    8968 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 20:01:28.919798    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 20:01:28.950357    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.958007    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.958007    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.966909    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.975038    8968 command_runner.go:130] > b5213941
	I0910 20:01:28.983186    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 20:01:29.011274    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 20:01:29.038463    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 20:01:29.045528    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 20:01:29.046156    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 20:01:29.056089    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 20:01:29.071010    8968 command_runner.go:130] > 51391683
	I0910 20:01:29.079135    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 20:01:29.106730    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 20:01:29.134716    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 20:01:29.141563    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 20:01:29.141563    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 20:01:29.150035    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 20:01:29.157622    8968 command_runner.go:130] > 3ec20f2e
	I0910 20:01:29.168696    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
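The sequence from 20:01:28 onward installs each CA into `/usr/share/ca-certificates` and then links it under `/etc/ssl/certs` by its OpenSSL subject hash (the `b5213941.0`-style names above). The hashing step can be reproduced with any PEM certificate; a sketch using a throwaway self-signed cert in a temp directory (the `demoCA` subject and paths are assumptions for illustration):

```shell
dir=$(mktemp -d)
# Throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# Same command the log runs: the hash names the symlink OpenSSL will look up.
h=$(openssl x509 -hash -noout -in "$dir/ca.pem")

# test -L || ln -fs, as in the logged one-liner.
test -L "$dir/$h.0" || ln -fs "$dir/ca.pem" "$dir/$h.0"
openssl x509 -noout -subject -in "$dir/$h.0"
```

OpenSSL resolves trust lookups by this `<hash>.0` naming scheme, which is why minikube creates the symlink rather than relying on the filename alone.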
	I0910 20:01:29.194767    8968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 20:01:29.201113    8968 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 20:01:29.201602    8968 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 20:01:29.201832    8968 kubeadm.go:934] updating node {m03 172.31.214.220 0 v1.31.0  false true} ...
	I0910 20:01:29.201832    8968 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-629100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.214.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 20:01:29.210168    8968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 20:01:29.228517    8968 command_runner.go:130] > kubeadm
	I0910 20:01:29.228713    8968 command_runner.go:130] > kubectl
	I0910 20:01:29.228713    8968 command_runner.go:130] > kubelet
	I0910 20:01:29.228713    8968 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 20:01:29.238279    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0910 20:01:29.255440    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0910 20:01:29.285564    8968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 20:01:29.324648    8968 ssh_runner.go:195] Run: grep 172.31.215.172	control-plane.minikube.internal$ /etc/hosts
	I0910 20:01:29.330462    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.215.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 20:01:29.360834    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:29.564245    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 20:01:29.590607    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 20:01:29.590926    8968 start.go:317] joinCluster: &{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 20:01:29.590926    8968 start.go:330] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0910 20:01:29.590926    8968 host.go:66] Checking if "multinode-629100-m03" exists ...
	I0910 20:01:29.591848    8968 mustload.go:65] Loading cluster: multinode-629100
	I0910 20:01:29.591848    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 20:01:29.592684    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 20:01:31.535953    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:31.535953    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:31.536167    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 20:01:31.536976    8968 api_server.go:166] Checking apiserver status ...
	I0910 20:01:31.547683    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 20:01:31.547683    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 20:01:33.447093    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:33.447093    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:33.447093    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:35.683777    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 20:01:35.683777    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:35.684054    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 20:01:35.798404    8968 command_runner.go:130] > 1954
	I0910 20:01:35.798457    8968 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.2504859s)
	I0910 20:01:35.807534    8968 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1954/cgroup
	W0910 20:01:35.824175    8968 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1954/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 20:01:35.833416    8968 ssh_runner.go:195] Run: ls
	I0910 20:01:35.839414    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 20:01:35.846929    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 200:
	ok
	I0910 20:01:35.855169    8968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain multinode-629100-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0910 20:01:36.012788    8968 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-6tdpv, kube-system/kube-proxy-4tzx6
	I0910 20:01:36.015537    8968 command_runner.go:130] > node/multinode-629100-m03 cordoned
	I0910 20:01:36.015537    8968 command_runner.go:130] > node/multinode-629100-m03 drained
	I0910 20:01:36.015537    8968 node.go:128] successfully drained node "multinode-629100-m03"
	I0910 20:01:36.015537    8968 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0910 20:01:36.015537    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:37.881650    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:37.881650    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:37.882660    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:40.126469    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:40.127529    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:40.127529    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:01:40.525398    8968 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 20:01:40.525658    8968 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0910 20:01:40.526648    8968 command_runner.go:130] > [reset] Stopping the kubelet service
	I0910 20:01:40.541887    8968 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0910 20:01:40.700008    8968 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0910 20:01:40.714826    8968 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0910 20:01:40.714826    8968 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0910 20:01:40.714826    8968 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0910 20:01:40.715804    8968 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0910 20:01:40.715804    8968 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0910 20:01:40.715804    8968 command_runner.go:130] > to reset your system's IPVS tables.
	I0910 20:01:40.715804    8968 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0910 20:01:40.715804    8968 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0910 20:01:40.718024    8968 command_runner.go:130] ! W0910 20:01:40.751622    1580 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0910 20:01:40.718463    8968 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.702608s)
	I0910 20:01:40.718463    8968 node.go:155] successfully reset node "multinode-629100-m03"
	I0910 20:01:40.719519    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 20:01:40.720091    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 20:01:40.720682    8968 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0910 20:01:40.720682    8968 round_trippers.go:463] DELETE https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:40.720682    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:40.720682    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:40.720682    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:40.720682    8968 round_trippers.go:473]     Content-Type: application/json
	I0910 20:01:40.744773    8968 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0910 20:01:40.745228    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:40.745228    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:40.745228    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Content-Length: 171
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:40 GMT
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Audit-Id: 0cce042d-7aa0-4e7b-8f8a-3110f38e7066
	I0910 20:01:40.745339    8968 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-629100-m03","kind":"nodes","uid":"39f4665c-e276-4967-a647-46b3a9a1c67a"}}
	I0910 20:01:40.745444    8968 node.go:180] successfully deleted node "multinode-629100-m03"
	I0910 20:01:40.745444    8968 start.go:334] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0910 20:01:40.745522    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 20:01:40.745522    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 20:01:42.576860    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:42.576860    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:42.576935    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:44.805167    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 20:01:44.805756    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:44.805756    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 20:01:44.981025    8968 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ojo1z0.33u67m7cn1zzquvz --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
	I0910 20:01:44.981166    8968 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2352974s)
	I0910 20:01:44.981201    8968 start.go:343] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0910 20:01:44.981277    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ojo1z0.33u67m7cn1zzquvz --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m03"
	I0910 20:01:45.147342    8968 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 20:01:46.490838    8968 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 20:01:46.490924    8968 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0910 20:01:46.490993    8968 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0910 20:01:46.491078    8968 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 20:01:46.491194    8968 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 20:01:46.491194    8968 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0910 20:01:46.491194    8968 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 20:01:46.491281    8968 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002263974s
	I0910 20:01:46.491308    8968 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0910 20:01:46.491372    8968 command_runner.go:130] > This node has joined the cluster:
	I0910 20:01:46.491431    8968 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0910 20:01:46.491431    8968 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0910 20:01:46.491532    8968 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0910 20:01:46.491532    8968 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ojo1z0.33u67m7cn1zzquvz --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m03": (1.5101533s)
	I0910 20:01:46.491532    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 20:01:46.688776    8968 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0910 20:01:46.882906    8968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-629100-m03 minikube.k8s.io/updated_at=2024_09_10T20_01_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=multinode-629100 minikube.k8s.io/primary=false
	I0910 20:01:47.010780    8968 command_runner.go:130] > node/multinode-629100-m03 labeled
	I0910 20:01:47.011183    8968 start.go:319] duration metric: took 17.4190796s to joinCluster
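The rejoin flow above (drain at 20:01:35, `kubeadm reset` at 20:01:36, node deletion at 20:01:40, token minting at 20:01:44, join at 20:01:44–46) hinges on the join command printed by `kubeadm token create --print-join-command`. Its two credentials can be pulled apart with plain `sed`; a sketch using the exact command string from this log:

```shell
# Join command as printed at 20:01:44 above.
join_cmd='kubeadm join control-plane.minikube.internal:8443 --token ojo1z0.33u67m7cn1zzquvz --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b'

# Bootstrap token (id.secret) used by the joining kubelet to authenticate.
token=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')

# Pin of the cluster CA so the new node can verify the API server.
hash=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')

echo "$token"
echo "$hash"
```

minikube appends `--ignore-preflight-errors=all`, the CRI socket, and `--node-name` to this base command before running it on the worker, as the logged `kubeadm join` invocation shows.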
	I0910 20:01:47.011334    8968 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0910 20:01:47.011471    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 20:01:47.014855    8968 out.go:177] * Verifying Kubernetes components...
	I0910 20:01:47.026004    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:47.208180    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 20:01:47.232814    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 20:01:47.233302    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 20:01:47.233975    8968 node_ready.go:35] waiting up to 6m0s for node "multinode-629100-m03" to be "Ready" ...
	I0910 20:01:47.233975    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:47.233975    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:47.233975    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:47.233975    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:47.238356    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:47.238424    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:47.238424    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:47.238424    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:47 GMT
	I0910 20:01:47.238424    8968 round_trippers.go:580]     Audit-Id: 018d3cc2-37ef-4ba0-80da-dc5fc6889d89
	I0910 20:01:47.238424    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:47.238424    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:47.238424    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:47.238680    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:47.748289    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:47.748526    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:47.748526    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:47.748526    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:47.751896    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:47.751896    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:47.751896    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:47.751896    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:47 GMT
	I0910 20:01:47.751896    8968 round_trippers.go:580]     Audit-Id: e386fa60-5fe5-4216-8c5a-fe5e5d3c9cbf
	I0910 20:01:47.751896    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:47.751896    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:47.751896    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:47.752039    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:48.248683    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:48.248683    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:48.248683    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:48.248683    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:48.261494    8968 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0910 20:01:48.261666    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:48.261666    8968 round_trippers.go:580]     Audit-Id: 3b4e6c02-4a3a-4e99-a8db-2554402d0034
	I0910 20:01:48.261666    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:48.261666    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:48.261666    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:48.261666    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:48.261756    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:48 GMT
	I0910 20:01:48.261905    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:48.738987    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:48.739076    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:48.739076    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:48.739076    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:48.741534    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:48.741534    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:48.741534    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:48.741534    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:48.741534    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:48 GMT
	I0910 20:01:48.741534    8968 round_trippers.go:580]     Audit-Id: 7d9d8f1a-35aa-4d4c-a71b-63d80b3ecda6
	I0910 20:01:48.741534    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:48.741534    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:48.742197    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:49.250080    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:49.250157    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:49.250157    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:49.250157    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:49.253352    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:49.253352    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:49.253352    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:49.253728    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:49 GMT
	I0910 20:01:49.253728    8968 round_trippers.go:580]     Audit-Id: 33840c83-4c48-4610-ad8d-14a280266c5b
	I0910 20:01:49.253728    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:49.253728    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:49.253728    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:49.253959    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:49.254532    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:49.742775    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:49.742775    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:49.742775    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:49.742775    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:49.745778    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:49.745778    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:49.745778    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:49.745778    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:49.745778    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:49.745778    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:49.746092    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:49 GMT
	I0910 20:01:49.746092    8968 round_trippers.go:580]     Audit-Id: efd89449-f9df-4685-acf0-7a4f959f5155
	I0910 20:01:49.746187    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:50.248140    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:50.248169    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:50.248211    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:50.248240    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:50.250593    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:50.250593    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:50.250593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:50.251283    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:50.251283    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:50 GMT
	I0910 20:01:50.251283    8968 round_trippers.go:580]     Audit-Id: 1ff81fab-0d7d-481a-9f12-516979946b90
	I0910 20:01:50.251283    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:50.251283    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:50.251622    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:50.735945    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:50.735945    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:50.735945    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:50.735945    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:50.739471    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:50.739471    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:50.739471    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:50.739574    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:50.739596    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:50.739596    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:50 GMT
	I0910 20:01:50.739596    8968 round_trippers.go:580]     Audit-Id: 08280fbf-3cc0-419f-8a79-b38e1591672a
	I0910 20:01:50.739596    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:50.739848    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:51.237652    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:51.237652    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:51.237652    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:51.237652    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:51.244056    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 20:01:51.244056    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:51.244056    8968 round_trippers.go:580]     Audit-Id: 78353392-a695-4315-aa33-47f32baa3eea
	I0910 20:01:51.244056    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:51.244056    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:51.244056    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:51.244056    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:51.244056    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:51 GMT
	I0910 20:01:51.244056    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:51.735812    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:51.735812    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:51.735812    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:51.735812    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:51.740859    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 20:01:51.741414    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:51.741414    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:51.741414    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:51.741414    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:51.741414    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:51 GMT
	I0910 20:01:51.741476    8968 round_trippers.go:580]     Audit-Id: 648c12d1-68ed-4dc6-857e-50e068702256
	I0910 20:01:51.741476    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:51.741521    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:51.741521    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:52.249622    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:52.249622    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:52.249622    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:52.249622    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:52.252861    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:52.252861    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:52.253361    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:52.253361    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:52.253361    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:52 GMT
	I0910 20:01:52.253361    8968 round_trippers.go:580]     Audit-Id: f8b7d6fa-7fd0-4c99-b793-64f6a89c69d9
	I0910 20:01:52.253361    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:52.253361    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:52.253529    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:52.748805    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:52.748917    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:52.748917    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:52.748917    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:52.752436    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:52.752436    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:52.752849    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:52.752849    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:52.752849    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:52.752849    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:52 GMT
	I0910 20:01:52.752849    8968 round_trippers.go:580]     Audit-Id: 3b9ac0a0-3d08-4b21-9d4d-6db3c581c780
	I0910 20:01:52.752849    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:52.753148    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:53.236502    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:53.236502    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:53.236502    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:53.236502    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:53.240071    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:53.240071    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:53.240071    8968 round_trippers.go:580]     Audit-Id: c70fc9c1-c9e2-4bba-93bd-adc759f71448
	I0910 20:01:53.240071    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:53.240071    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:53.240071    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:53.240071    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:53.240071    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:53 GMT
	I0910 20:01:53.240601    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:53.736048    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:53.736251    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:53.736251    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:53.736251    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:53.740105    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:53.740165    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:53.740165    8968 round_trippers.go:580]     Audit-Id: 495730d4-cb25-4d83-8ae7-3f9c567e0249
	I0910 20:01:53.740165    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:53.740165    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:53.740165    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:53.740165    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:53.740165    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:53 GMT
	I0910 20:01:53.740259    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:54.237440    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:54.237440    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:54.237440    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:54.237440    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:54.240804    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:54.240804    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:54.240804    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:54.240804    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:54.241423    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:54.241423    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:54 GMT
	I0910 20:01:54.241423    8968 round_trippers.go:580]     Audit-Id: c149fceb-b484-4349-ac0f-161f9f3f1c39
	I0910 20:01:54.241423    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:54.241682    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:54.241682    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:54.738048    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:54.738048    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:54.738134    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:54.738134    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:54.741525    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:54.742159    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:54.742159    8968 round_trippers.go:580]     Audit-Id: b27bed68-b101-4074-ba87-98f124603c28
	I0910 20:01:54.742159    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:54.742159    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:54.742159    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:54.742159    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:54.742270    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:54 GMT
	I0910 20:01:54.742333    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:55.235921    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:55.236129    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:55.236129    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:55.236129    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:55.238809    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:55.238809    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:55.238809    8968 round_trippers.go:580]     Audit-Id: f2e1709d-1df3-49a1-bdb5-ea1919869c04
	I0910 20:01:55.239779    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:55.239779    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:55.239779    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:55.239779    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:55.239779    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:55 GMT
	I0910 20:01:55.239813    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:55.737965    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:55.738035    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:55.738035    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:55.738035    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:55.742290    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 20:01:55.742469    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:55.742469    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:55 GMT
	I0910 20:01:55.742469    8968 round_trippers.go:580]     Audit-Id: 2efdaae7-ac52-4f4f-95fe-379b4b211af9
	I0910 20:01:55.742469    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:55.742469    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:55.742469    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:55.742554    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:55.742712    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:56.242501    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:56.242501    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:56.242501    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:56.242501    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:56.246924    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:56.247006    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:56.247006    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:56.247006    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:56 GMT
	I0910 20:01:56.247006    8968 round_trippers.go:580]     Audit-Id: f9da7e8c-bb9c-45de-9845-aebd2a06ba58
	I0910 20:01:56.247093    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:56.247093    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:56.247130    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:56.247158    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:56.247847    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:56.742861    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:56.742940    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:56.742940    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:56.742940    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:56.746314    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:56.746314    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:56.746504    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:56 GMT
	I0910 20:01:56.746504    8968 round_trippers.go:580]     Audit-Id: 52b20db3-6927-4475-8b98-ef6f1cf2c16d
	I0910 20:01:56.746504    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:56.746504    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:56.746504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:56.746504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:56.746816    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:57.243079    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:57.243186    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:57.243186    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:57.243186    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:57.246611    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:57.246611    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:57.246611    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:57 GMT
	I0910 20:01:57.246611    8968 round_trippers.go:580]     Audit-Id: 16c25b59-8f64-4843-b584-7513c881f1fb
	I0910 20:01:57.247301    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:57.247301    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:57.247301    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:57.247301    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:57.247473    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:57.742934    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:57.743012    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:57.743012    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:57.743012    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:57.745895    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:57.745895    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:57.745895    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:57.745998    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:57.745998    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:57 GMT
	I0910 20:01:57.745998    8968 round_trippers.go:580]     Audit-Id: 3ab5b564-4a02-4089-bc02-91d4d2178d27
	I0910 20:01:57.745998    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:57.745998    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:57.746195    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:58.240249    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:58.240249    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:58.240249    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:58.240249    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:58.243615    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:58.243615    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:58.243615    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:58.243615    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:58.243615    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:58.243615    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:58 GMT
	I0910 20:01:58.243615    8968 round_trippers.go:580]     Audit-Id: a7ad6bd5-1feb-4dca-b78c-61625a050af5
	I0910 20:01:58.243615    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:58.244542    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:58.740841    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:58.741249    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:58.741249    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:58.741249    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:58.744698    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:58.744698    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:58.744698    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:58.744698    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:58 GMT
	I0910 20:01:58.744698    8968 round_trippers.go:580]     Audit-Id: 155da96f-b9e6-4378-ba75-09855b57b3a9
	I0910 20:01:58.744698    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:58.744698    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:58.744698    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:58.745278    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:58.745596    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:59.244731    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:59.244731    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:59.244731    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:59.244731    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:59.247200    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:59.247200    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:59.247200    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:59.247200    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:59.247200    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:59.247200    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:59 GMT
	I0910 20:01:59.247200    8968 round_trippers.go:580]     Audit-Id: 6ce737b8-278d-4e2c-880f-82e136275b73
	I0910 20:01:59.247200    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:59.248287    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:59.745064    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:59.745140    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:59.745140    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:59.745216    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:59.747349    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:59.747349    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:59.747349    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:59.747349    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:59.748273    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:59 GMT
	I0910 20:01:59.748273    8968 round_trippers.go:580]     Audit-Id: c3dc3c0b-1018-47c5-9775-2b3daf772078
	I0910 20:01:59.748273    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:59.748273    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:59.748394    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:02:00.241361    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:00.241361    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:00.241361    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:00.241361    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:00.247392    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 20:02:00.247392    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:00.247392    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:00.247392    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:00.247392    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:00.247588    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:00.247588    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:00 GMT
	I0910 20:02:00.247588    8968 round_trippers.go:580]     Audit-Id: 69ff8858-1b70-49d5-b25f-f85eba5a23c8
	I0910 20:02:00.247758    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:02:00.740360    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:00.740475    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:00.740475    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:00.740475    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:00.744216    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:00.745115    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:00.745115    8968 round_trippers.go:580]     Audit-Id: 63d29ffe-0799-4e53-bef5-32999527a116
	I0910 20:02:00.745115    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:00.745115    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:00.745115    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:00.745115    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:00.745115    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:00 GMT
	I0910 20:02:00.745347    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:02:00.745825    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:02:01.237611    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:01.237611    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.237611    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.237611    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.241206    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:01.241206    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.241206    8968 round_trippers.go:580]     Audit-Id: bc862643-9a68-4ce5-991d-31ac9eba6974
	I0910 20:02:01.241206    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.241206    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.241206    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.241206    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.241206    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:01 GMT
	I0910 20:02:01.241737    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:02:01.738080    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:01.738209    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.738209    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.738333    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.740988    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.740988    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.740988    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:01 GMT
	I0910 20:02:01.740988    8968 round_trippers.go:580]     Audit-Id: 860b8764-3677-4528-ae1b-3b460b3e99c2
	I0910 20:02:01.741745    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.741745    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.741785    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.741785    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.741899    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2176","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3758 chars]
	I0910 20:02:01.742427    8968 node_ready.go:49] node "multinode-629100-m03" has status "Ready":"True"
	I0910 20:02:01.742427    8968 node_ready.go:38] duration metric: took 14.5074711s for node "multinode-629100-m03" to be "Ready" ...
	I0910 20:02:01.742427    8968 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 20:02:01.742848    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 20:02:01.742936    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.742936    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.742936    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.747160    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 20:02:01.747160    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.747160    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.747160    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.747160    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:01 GMT
	I0910 20:02:01.747160    8968 round_trippers.go:580]     Audit-Id: 2c1a0558-5188-425a-a3f7-5addf35dda44
	I0910 20:02:01.747160    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.747160    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.748409    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2176"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89119 chars]
	I0910 20:02:01.751409    8968 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.751409    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 20:02:01.751409    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.752430    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.752430    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.755100    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.755100    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.755100    8968 round_trippers.go:580]     Audit-Id: 40f981fd-15cc-4246-b583-f03ad3dc5598
	I0910 20:02:01.755100    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.755100    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.755100    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.755100    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.755100    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.755476    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0910 20:02:01.756097    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:01.756097    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.756176    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.756176    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.758725    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.758958    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.758958    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.758958    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.758958    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.758958    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.758958    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.758958    8968 round_trippers.go:580]     Audit-Id: 0b8db9d3-1caa-4aab-b233-c3dec45379f9
	I0910 20:02:01.758958    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:01.759580    8968 pod_ready.go:93] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:01.759616    8968 pod_ready.go:82] duration metric: took 8.2061ms for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.759616    8968 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.759742    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 20:02:01.759742    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.759742    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.759742    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.762019    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.762593    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.762593    8968 round_trippers.go:580]     Audit-Id: def92302-fb91-42e1-80bd-2835397395ac
	I0910 20:02:01.762593    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.762593    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.762593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.762593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.762593    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.762790    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1766","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I0910 20:02:01.763309    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:01.763309    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.763309    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.763309    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.765257    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 20:02:01.765257    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.765257    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.765257    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.765257    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.765257    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.765257    8968 round_trippers.go:580]     Audit-Id: 0eb77c63-feab-4ae7-adc5-707c1e7cdc87
	I0910 20:02:01.766515    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.766515    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:01.766515    8968 pod_ready.go:93] pod "etcd-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:01.767116    8968 pod_ready.go:82] duration metric: took 7.4997ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.767116    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.767116    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 20:02:01.767116    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.767116    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.767116    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.769821    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 20:02:01.769821    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.769821    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.769821    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.769821    8968 round_trippers.go:580]     Audit-Id: a19d464e-9843-4e8f-a0c6-765d42f22d35
	I0910 20:02:01.769821    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.769821    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.769821    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.769821    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5","resourceVersion":"1763","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.215.172:8443","kubernetes.io/config.hash":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.mirror":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.seen":"2024-09-10T19:56:41.591483300Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I0910 20:02:01.769821    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:01.769821    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.769821    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.769821    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.773796    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:01.774007    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.774007    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.774007    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.774007    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.774007    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.774007    8968 round_trippers.go:580]     Audit-Id: bef49463-76ca-49dd-9475-56afdbb80fbd
	I0910 20:02:01.774007    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.774007    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:01.774007    8968 pod_ready.go:93] pod "kube-apiserver-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:01.774007    8968 pod_ready.go:82] duration metric: took 6.8905ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.774007    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.774549    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 20:02:01.774634    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.774634    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.774634    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.777275    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 20:02:01.777275    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.777275    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.777275    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.777275    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.777275    8968 round_trippers.go:580]     Audit-Id: ad54ebea-f749-4d27-9cb9-621647aef2f3
	I0910 20:02:01.777275    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.777275    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.777592    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"1770","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0910 20:02:01.778070    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:01.778131    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.778131    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.778131    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.780591    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.780693    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.780693    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.780693    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.780693    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.780693    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.780693    8968 round_trippers.go:580]     Audit-Id: f35b7e4c-b163-49b7-b34a-a98fbf9aac5e
	I0910 20:02:01.780693    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.780693    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:01.780693    8968 pod_ready.go:93] pod "kube-controller-manager-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:01.781226    8968 pod_ready.go:82] duration metric: took 7.2185ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.781226    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.942249    8968 request.go:632] Waited for 160.6776ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 20:02:01.942465    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 20:02:01.942465    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.942465    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.942465    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.946031    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:01.946031    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.946031    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.946031    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.946031    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.946031    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.946031    8968 round_trippers.go:580]     Audit-Id: 533681df-f67e-4d0c-80f6-22391951fab4
	I0910 20:02:01.946031    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.946774    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4tzx6","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bb18c28-3ee9-4028-a61d-3d7f6ea31894","resourceVersion":"2153","creationTimestamp":"2024-09-10T19:42:55Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6208 chars]
	I0910 20:02:02.145611    8968 request.go:632] Waited for 198.2393ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:02.145611    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:02.145611    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.145611    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.145611    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.149496    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:02.149496    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.149496    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.149496    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:02.149997    8968 round_trippers.go:580]     Audit-Id: 93f9ef4e-60e0-40f9-9c7e-54022aaa5d48
	I0910 20:02:02.149997    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.149997    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.149997    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.150208    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2176","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3758 chars]
	I0910 20:02:02.150586    8968 pod_ready.go:93] pod "kube-proxy-4tzx6" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:02.150586    8968 pod_ready.go:82] duration metric: took 369.2896ms for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.150586    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.348481    8968 request.go:632] Waited for 197.6659ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 20:02:02.348605    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 20:02:02.348605    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.348672    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.348672    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.352250    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:02.352250    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.352250    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.352250    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.352250    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.352250    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.352250    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:02.352250    8968 round_trippers.go:580]     Audit-Id: e0b34baf-3f65-40ab-aa1c-925d9c07e495
	I0910 20:02:02.353298    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qqrrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fc7fdda-d5e4-4c72-96c1-2348eb72b491","resourceVersion":"1960","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6203 chars]
	I0910 20:02:02.552537    8968 request.go:632] Waited for 198.711ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 20:02:02.552934    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 20:02:02.552934    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.552934    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.552934    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.561546    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 20:02:02.561546    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.561546    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.561546    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.561546    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:02.561546    8968 round_trippers.go:580]     Audit-Id: 5e6cbac9-d0a3-4af0-a707-eda87c5226fb
	I0910 20:02:02.561546    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.561546    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.562506    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1995","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3806 chars]
	I0910 20:02:02.562506    8968 pod_ready.go:93] pod "kube-proxy-qqrrg" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:02.562506    8968 pod_ready.go:82] duration metric: took 411.8917ms for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.562506    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.739110    8968 request.go:632] Waited for 176.5917ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 20:02:02.739110    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 20:02:02.739521    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.739521    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.739521    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.742823    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:02.743304    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.743304    8968 round_trippers.go:580]     Audit-Id: c56fbd43-2371-4c0f-b765-606ae9adbea0
	I0910 20:02:02.743304    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.743304    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.743304    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.743304    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.743304    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:02.743424    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"1662","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0910 20:02:02.943067    8968 request.go:632] Waited for 198.7271ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:02.943203    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:02.943250    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.943250    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.943250    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.949406    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 20:02:02.949406    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.949406    8968 round_trippers.go:580]     Audit-Id: 017cd4a9-05c2-408c-bb61-d8b0b55fe741
	I0910 20:02:02.949406    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.949406    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.949406    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.949406    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.949406    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:03 GMT
	I0910 20:02:02.950060    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:02.950060    8968 pod_ready.go:93] pod "kube-proxy-wqf2d" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:02.950060    8968 pod_ready.go:82] duration metric: took 387.5282ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.950060    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:03.147199    8968 request.go:632] Waited for 197.0294ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 20:02:03.147493    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 20:02:03.147493    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:03.147493    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:03.147622    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:03.151102    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:03.151102    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:03.151102    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:03.151102    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:03.151102    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:03 GMT
	I0910 20:02:03.151102    8968 round_trippers.go:580]     Audit-Id: 03637e95-c1ac-41d6-92b0-8cb038cb0cd0
	I0910 20:02:03.151102    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:03.151102    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:03.151348    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"1757","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0910 20:02:03.350153    8968 request.go:632] Waited for 198.0549ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:03.350153    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:03.350350    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:03.350350    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:03.350350    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:03.353719    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:03.353719    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:03.353719    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:03.353719    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:03.353719    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:03.353955    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:03.353955    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:03 GMT
	I0910 20:02:03.353955    8968 round_trippers.go:580]     Audit-Id: d8763ea0-2c31-49bb-b78e-5d9871fcf628
	I0910 20:02:03.353955    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:03.354591    8968 pod_ready.go:93] pod "kube-scheduler-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:03.354591    8968 pod_ready.go:82] duration metric: took 404.5028ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:03.354591    8968 pod_ready.go:39] duration metric: took 1.6117576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 20:02:03.354702    8968 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 20:02:03.363114    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 20:02:03.390754    8968 system_svc.go:56] duration metric: took 36.0494ms WaitForService to wait for kubelet
	I0910 20:02:03.390865    8968 kubeadm.go:582] duration metric: took 16.3783364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 20:02:03.390865    8968 node_conditions.go:102] verifying NodePressure condition ...
	I0910 20:02:03.554628    8968 request.go:632] Waited for 163.3224ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes
	I0910 20:02:03.554628    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes
	I0910 20:02:03.554628    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:03.554628    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:03.554628    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:03.558687    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 20:02:03.558687    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:03.558687    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:03.558687    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:03.558687    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:03 GMT
	I0910 20:02:03.558687    8968 round_trippers.go:580]     Audit-Id: 177a5a79-ffcb-4c28-8074-bd26786395c8
	I0910 20:02:03.558687    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:03.558687    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:03.558687    8968 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2180"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"2180","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14833 chars]
	I0910 20:02:03.559808    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 20:02:03.559808    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 20:02:03.559808    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 20:02:03.559808    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 20:02:03.559808    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 20:02:03.559808    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 20:02:03.559808    8968 node_conditions.go:105] duration metric: took 168.932ms to run NodePressure ...
	I0910 20:02:03.559808    8968 start.go:241] waiting for startup goroutines ...
	I0910 20:02:03.559902    8968 start.go:255] writing updated cluster config ...
	I0910 20:02:03.568819    8968 ssh_runner.go:195] Run: rm -f paused
	I0910 20:02:03.687767    8968 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 20:02:03.691253    8968 out.go:177] * Done! kubectl is now configured to use "multinode-629100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.044493596Z" level=warning msg="cleaning up after shim disconnected" id=d78644ad6da20ca56d201dd6cd44531fe1c8fee7f864b5340426d9b70599b073 namespace=moby
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.044502597Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.068856493Z" level=warning msg="cleanup warnings time=\"2024-09-10T19:57:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.841955713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.842886209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.842969817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.843252246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.851949137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.852000143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.852012044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.852090752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:19 multinode-629100 cri-dockerd[1352]: time="2024-09-10T19:57:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b31945a718c27b4c2824e7c953e0e3d304fbe757e13f3f697c3ba45e7a7d1b82/resolv.conf as [nameserver 172.31.208.1]"
	Sep 10 19:57:19 multinode-629100 cri-dockerd[1352]: time="2024-09-10T19:57:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/878e8a395dfe7f550c4ffccd43f2633e5c04bddac7fe6e3cee8bed5e38f92307/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.325943412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.326116230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.326134932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.326807799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.354305976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.354512096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.354613907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.354871733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:29 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:29.863756118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:29 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:29.863833428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:29 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:29.863852531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:29 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:29.865278216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0592e8e987e86       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   1382ad57fdb76       storage-provisioner
	07fb60b2369e2       8c811b4aec35f                                                                                         5 minutes ago       Running             busybox                   1                   878e8a395dfe7       busybox-7dff88458-lzs87
	bba13e3979fe6       cbb01a7bd410d                                                                                         5 minutes ago       Running             coredns                   1                   b31945a718c27       coredns-6f6b679f8f-srtv8
	79ce262b775dd       12968670680f4                                                                                         5 minutes ago       Running             kindnet-cni               1                   c165f79ee0c66       kindnet-lj2v2
	8df28a487cc9c       ad83b2ca7b09e                                                                                         5 minutes ago       Running             kube-proxy                1                   4d2ed7f661678       kube-proxy-wqf2d
	d78644ad6da20       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   1382ad57fdb76       storage-provisioner
	5371b75c6a4eb       2e96e5913fc06                                                                                         5 minutes ago       Running             etcd                      0                   f5952139dd10d       etcd-multinode-629100
	6c4b89f91c728       604f5db92eaa8                                                                                         5 minutes ago       Running             kube-apiserver            0                   8b41f5de76aa6       kube-apiserver-multinode-629100
	c6849e798f8b7       1766f54c897f0                                                                                         5 minutes ago       Running             kube-scheduler            1                   d862d30a973e0       kube-scheduler-multinode-629100
	1dc6a0b68f7be       045733566833c                                                                                         5 minutes ago       Running             kube-controller-manager   1                   af4deda9f3e84       kube-controller-manager-multinode-629100
	b1a88f7f52270       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Exited              busybox                   0                   ea5e1070e7dea       busybox-7dff88458-lzs87
	039fd49f157a9       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   bf116f91589fc       coredns-6f6b679f8f-srtv8
	33f88ed7aee25       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              26 minutes ago      Exited              kindnet-cni               0                   1d92603202b00       kindnet-lj2v2
	85b03f4986715       ad83b2ca7b09e                                                                                         26 minutes ago      Exited              kube-proxy                0                   4e550827f00f7       kube-proxy-wqf2d
	5cb559fed2d8a       1766f54c897f0                                                                                         26 minutes ago      Exited              kube-scheduler            0                   49d9c6949234d       kube-scheduler-multinode-629100
	ea7220d439d1b       045733566833c                                                                                         26 minutes ago      Exited              kube-controller-manager   0                   db7037ca07a46       kube-controller-manager-multinode-629100
	
	
	==> coredns [039fd49f157a] <==
	[INFO] 10.244.0.3:49423 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092007s
	[INFO] 10.244.0.3:43701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095107s
	[INFO] 10.244.0.3:51536 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048303s
	[INFO] 10.244.0.3:59362 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00028942s
	[INFO] 10.244.0.3:37417 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014271s
	[INFO] 10.244.0.3:50609 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077805s
	[INFO] 10.244.0.3:45492 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014891s
	[INFO] 10.244.1.2:47303 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100307s
	[INFO] 10.244.1.2:50959 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013871s
	[INFO] 10.244.1.2:34061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064904s
	[INFO] 10.244.1.2:33504 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059204s
	[INFO] 10.244.0.3:44472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014991s
	[INFO] 10.244.0.3:51126 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130209s
	[INFO] 10.244.0.3:35880 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062805s
	[INFO] 10.244.0.3:47290 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114308s
	[INFO] 10.244.1.2:59801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127909s
	[INFO] 10.244.1.2:44820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105107s
	[INFO] 10.244.1.2:51097 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169412s
	[INFO] 10.244.1.2:50721 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136709s
	[INFO] 10.244.0.3:48616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000315622s
	[INFO] 10.244.0.3:45256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000171611s
	[INFO] 10.244.0.3:51021 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077906s
	[INFO] 10.244.0.3:42471 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135209s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bba13e3979fe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48913 - 35653 "HINFO IN 6807478851987409090.9100571777494782227. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.075144087s
	
	
	==> describe nodes <==
	Name:               multinode-629100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-629100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-629100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T19_35_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 19:35:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-629100
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 20:02:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 20:02:03 +0000   Tue, 10 Sep 2024 19:35:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 20:02:03 +0000   Tue, 10 Sep 2024 19:35:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 20:02:03 +0000   Tue, 10 Sep 2024 19:35:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 20:02:03 +0000   Tue, 10 Sep 2024 19:57:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.215.172
	  Hostname:    multinode-629100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d19cefb91db9497b816b4a43c361a0ab
	  System UUID:                e294be3b-926e-3f4f-8147-8c2e1d6d31e8
	  Boot ID:                    65691e6c-346f-4af5-abb6-c00142d61fbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lzs87                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-6f6b679f8f-srtv8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-629100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m35s
	  kube-system                 kindnet-lj2v2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-629100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-multinode-629100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-wqf2d                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-629100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 5m33s                  kube-proxy       
	  Normal  Starting                 26m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     26m                    kubelet          Node multinode-629100 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  26m                    kubelet          Node multinode-629100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m                    kubelet          Node multinode-629100 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                    node-controller  Node multinode-629100 event: Registered Node multinode-629100 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-629100 status is now: NodeReady
	  Normal  Starting                 5m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node multinode-629100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node multinode-629100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m41s)  kubelet          Node multinode-629100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m33s                  node-controller  Node multinode-629100 event: Registered Node multinode-629100 in Controller
	
	
	Name:               multinode-629100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-629100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-629100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T19_59_26_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 19:59:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-629100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 20:02:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:59:42 +0000   Tue, 10 Sep 2024 19:59:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:59:42 +0000   Tue, 10 Sep 2024 19:59:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:59:42 +0000   Tue, 10 Sep 2024 19:59:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:59:42 +0000   Tue, 10 Sep 2024 19:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.210.34
	  Hostname:    multinode-629100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf73bb2604364adb85232c437f2a54e5
	  System UUID:                0fc9d8ea-7869-bd42-95ee-012842e5540a
	  Boot ID:                    e782fa6a-1fc5-40ad-b179-77ffc1e8f660
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t8d6l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-5crht              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-qqrrg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m53s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)      kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)      kubelet          Node multinode-629100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)      kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                    kubelet          Node multinode-629100-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m57s (x2 over 2m57s)  kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m57s (x2 over 2m57s)  kubelet          Node multinode-629100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s (x2 over 2m57s)  kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m52s                  node-controller  Node multinode-629100-m02 event: Registered Node multinode-629100-m02 in Controller
	  Normal  NodeReady                2m40s                  kubelet          Node multinode-629100-m02 status is now: NodeReady
	
	
	Name:               multinode-629100-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-629100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-629100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T20_01_46_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 20:01:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-629100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 20:02:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 20:02:01 +0000   Tue, 10 Sep 2024 20:01:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 20:02:01 +0000   Tue, 10 Sep 2024 20:01:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 20:02:01 +0000   Tue, 10 Sep 2024 20:01:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 20:02:01 +0000   Tue, 10 Sep 2024 20:02:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.214.220
	  Hostname:    multinode-629100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d23f1ff9857046be89b402522869a516
	  System UUID:                706c2b98-3b56-344f-94d7-74a48fb097d3
	  Boot ID:                    e4b979d4-1a65-4168-8619-a4804df70d72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6tdpv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-4tzx6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m49s                  kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 32s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-629100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m                    kubelet          Node multinode-629100-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  9m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     9m53s (x2 over 9m53s)  kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m53s (x2 over 9m53s)  kubelet          Node multinode-629100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m53s (x2 over 9m53s)  kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m37s                  kubelet          Node multinode-629100-m03 status is now: NodeReady
	  Normal  Starting                 36s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x2 over 36s)      kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x2 over 36s)      kubelet          Node multinode-629100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x2 over 36s)      kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           32s                    node-controller  Node multinode-629100-m03 event: Registered Node multinode-629100-m03 in Controller
	  Normal  NodeReady                21s                    kubelet          Node multinode-629100-m03 status is now: NodeReady
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +6.228148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.707768] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.867413] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.281183] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep10 19:56] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.181344] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[ +23.772473] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +0.080755] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.467546] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.176877] systemd-fstab-generator[1060]: Ignoring "noauto" option for root device
	[  +0.208774] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +2.885128] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.162199] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.160750] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.237614] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.793443] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	[  +0.074688] kauditd_printk_skb: 202 callbacks suppressed
	[  +2.886212] systemd-fstab-generator[1614]: Ignoring "noauto" option for root device
	[  +5.954658] kauditd_printk_skb: 84 callbacks suppressed
	[  +3.913261] systemd-fstab-generator[2466]: Ignoring "noauto" option for root device
	[Sep10 19:57] kauditd_printk_skb: 72 callbacks suppressed
	[ +11.857050] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [5371b75c6a4e] <==
	{"level":"info","ts":"2024-09-10T19:56:43.603537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf switched to configuration voters=(2112820234258889423)"}
	{"level":"info","ts":"2024-09-10T19:56:43.604076Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8392dc51522b279d","local-member-id":"1d523ecf11423acf","added-peer-id":"1d523ecf11423acf","added-peer-peer-urls":["https://172.31.210.71:2380"]}
	{"level":"info","ts":"2024-09-10T19:56:43.604575Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8392dc51522b279d","local-member-id":"1d523ecf11423acf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:56:43.605627Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:56:43.623084Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T19:56:43.635867Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.31.215.172:2380"}
	{"level":"info","ts":"2024-09-10T19:56:43.641021Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.31.215.172:2380"}
	{"level":"info","ts":"2024-09-10T19:56:43.641152Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1d523ecf11423acf","initial-advertise-peer-urls":["https://172.31.215.172:2380"],"listen-peer-urls":["https://172.31.215.172:2380"],"advertise-client-urls":["https://172.31.215.172:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.215.172:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T19:56:43.645591Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T19:56:45.149423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-10T19:56:45.149537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T19:56:45.149583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf received MsgPreVoteResp from 1d523ecf11423acf at term 2"}
	{"level":"info","ts":"2024-09-10T19:56:45.149596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T19:56:45.149606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf received MsgVoteResp from 1d523ecf11423acf at term 3"}
	{"level":"info","ts":"2024-09-10T19:56:45.149617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf became leader at term 3"}
	{"level":"info","ts":"2024-09-10T19:56:45.149637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1d523ecf11423acf elected leader 1d523ecf11423acf at term 3"}
	{"level":"info","ts":"2024-09-10T19:56:45.154855Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1d523ecf11423acf","local-member-attributes":"{Name:multinode-629100 ClientURLs:[https://172.31.215.172:2379]}","request-path":"/0/members/1d523ecf11423acf/attributes","cluster-id":"8392dc51522b279d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T19:56:45.154972Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:56:45.155651Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T19:56:45.155828Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T19:56:45.155951Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:56:45.158005Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:56:45.159101Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T19:56:45.158078Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:56:45.160906Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.215.172:2379"}
	
	
	==> kernel <==
	 20:02:22 up 7 min,  0 users,  load average: 0.36, 0.16, 0.07
	Linux multinode-629100 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33f88ed7aee2] <==
	I0910 19:53:45.159789       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:53:55.153125       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:53:55.153224       1 main.go:299] handling current node
	I0910 19:53:55.153249       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:53:55.153264       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:53:55.153453       1 main.go:295] Handling node with IPs: map[172.31.210.110:{}]
	I0910 19:53:55.153476       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:54:05.152730       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:54:05.152911       1 main.go:299] handling current node
	I0910 19:54:05.152928       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:54:05.152947       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:54:05.153065       1 main.go:295] Handling node with IPs: map[172.31.210.110:{}]
	I0910 19:54:05.153236       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:54:15.155625       1 main.go:295] Handling node with IPs: map[172.31.210.110:{}]
	I0910 19:54:15.155896       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:54:15.156166       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:54:15.156260       1 main.go:299] handling current node
	I0910 19:54:15.156356       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:54:15.156465       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:54:25.160608       1 main.go:295] Handling node with IPs: map[172.31.210.110:{}]
	I0910 19:54:25.160926       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:54:25.161180       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:54:25.161345       1 main.go:299] handling current node
	I0910 19:54:25.161445       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:54:25.161537       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [79ce262b775d] <==
	I0910 20:01:49.408328       1 main.go:295] Handling node with IPs: map[172.31.215.172:{}]
	I0910 20:01:49.408373       1 main.go:299] handling current node
	I0910 20:01:49.408389       1 main.go:295] Handling node with IPs: map[172.31.210.34:{}]
	I0910 20:01:49.408396       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 20:01:49.408790       1 main.go:295] Handling node with IPs: map[172.31.214.220:{}]
	I0910 20:01:49.408894       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.2.0/24] 
	I0910 20:01:49.408974       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.31.214.220 Flags: [] Table: 0} 
	I0910 20:01:59.408231       1 main.go:295] Handling node with IPs: map[172.31.215.172:{}]
	I0910 20:01:59.408276       1 main.go:299] handling current node
	I0910 20:01:59.408332       1 main.go:295] Handling node with IPs: map[172.31.210.34:{}]
	I0910 20:01:59.408343       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 20:01:59.408591       1 main.go:295] Handling node with IPs: map[172.31.214.220:{}]
	I0910 20:01:59.408763       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.2.0/24] 
	I0910 20:02:09.407718       1 main.go:295] Handling node with IPs: map[172.31.210.34:{}]
	I0910 20:02:09.407921       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 20:02:09.408193       1 main.go:295] Handling node with IPs: map[172.31.214.220:{}]
	I0910 20:02:09.408262       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.2.0/24] 
	I0910 20:02:09.408488       1 main.go:295] Handling node with IPs: map[172.31.215.172:{}]
	I0910 20:02:09.408545       1 main.go:299] handling current node
	I0910 20:02:19.412664       1 main.go:295] Handling node with IPs: map[172.31.215.172:{}]
	I0910 20:02:19.412816       1 main.go:299] handling current node
	I0910 20:02:19.412834       1 main.go:295] Handling node with IPs: map[172.31.210.34:{}]
	I0910 20:02:19.412842       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 20:02:19.412971       1 main.go:295] Handling node with IPs: map[172.31.214.220:{}]
	I0910 20:02:19.412994       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [6c4b89f91c72] <==
	I0910 19:56:46.475771       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 19:56:46.475798       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0910 19:56:46.478887       1 aggregator.go:171] initial CRD sync complete...
	I0910 19:56:46.479090       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 19:56:46.479270       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 19:56:46.480187       1 cache.go:39] Caches are synced for autoregister controller
	I0910 19:56:46.518495       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0910 19:56:46.540111       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 19:56:46.540210       1 policy_source.go:224] refreshing policies
	I0910 19:56:46.548245       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 19:56:46.550689       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 19:56:46.551091       1 shared_informer.go:320] Caches are synced for configmaps
	I0910 19:56:46.554884       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 19:56:46.555100       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 19:56:46.560946       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0910 19:56:46.569884       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 19:56:47.356019       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0910 19:56:47.771678       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.215.172]
	I0910 19:56:47.775512       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 19:56:47.791959       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 19:56:49.050278       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 19:56:49.226365       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 19:56:49.241490       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 19:56:49.348980       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 19:56:49.359102       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1dc6a0b68f7b] <==
	I0910 19:59:52.545670       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.004µs"
	I0910 19:59:52.569915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="272.038µs"
	I0910 19:59:52.732434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.307µs"
	I0910 19:59:52.737346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.205µs"
	I0910 19:59:53.763781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.484894ms"
	I0910 19:59:53.764137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.208µs"
	I0910 20:01:36.231192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:36.253589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:40.989515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 20:01:40.989849       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:46.552799       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-629100-m03\" does not exist"
	I0910 20:01:46.552860       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 20:01:46.567605       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-629100-m03" podCIDRs=["10.244.2.0/24"]
	I0910 20:01:46.567688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:46.567708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:46.590490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:46.732405       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:47.253334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:50.252496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:56.654527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:02:01.777048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 20:02:01.777404       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:02:01.795533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:02:03.670236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100"
	I0910 20:02:05.157627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	
	
	==> kube-controller-manager [ea7220d439d1] <==
	I0910 19:50:05.897148       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:50:05.899096       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:50:05.923857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:50:11.040217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:19.516846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:19.534024       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:24.104286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:24.104342       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:52:29.743528       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:52:29.744004       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-629100-m03\" does not exist"
	I0910 19:52:29.771781       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-629100-m03" podCIDRs=["10.244.3.0/24"]
	I0910 19:52:29.773227       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:29.773553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:29.857215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:30.388502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:31.071748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:39.931055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:45.228950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:45.229037       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:52:45.244148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:45.992858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:54:11.025904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:54:11.025988       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:54:11.054844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:54:16.335919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	
	
	==> kube-proxy [85b03f498671] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 19:35:47.926887       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 19:35:47.936949       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.31.210.71"]
	E0910 19:35:47.937088       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 19:35:47.985558       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 19:35:47.985667       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 19:35:47.985694       1 server_linux.go:169] "Using iptables Proxier"
	I0910 19:35:47.989351       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 19:35:47.989836       1 server.go:483] "Version info" version="v1.31.0"
	I0910 19:35:47.989943       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:35:47.992068       1 config.go:197] "Starting service config controller"
	I0910 19:35:47.994045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 19:35:47.994294       1 config.go:326] "Starting node config controller"
	I0910 19:35:47.994439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 19:35:47.993518       1 config.go:104] "Starting endpoint slice config controller"
	I0910 19:35:47.996484       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 19:35:48.095182       1 shared_informer.go:320] Caches are synced for service config
	I0910 19:35:48.095444       1 shared_informer.go:320] Caches are synced for node config
	I0910 19:35:48.097751       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [8df28a487cc9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 19:56:48.308685       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 19:56:48.377764       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.31.215.172"]
	E0910 19:56:48.377940       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 19:56:48.505658       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 19:56:48.505871       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 19:56:48.506111       1 server_linux.go:169] "Using iptables Proxier"
	I0910 19:56:48.512108       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 19:56:48.515499       1 server.go:483] "Version info" version="v1.31.0"
	I0910 19:56:48.515674       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:56:48.526692       1 config.go:197] "Starting service config controller"
	I0910 19:56:48.526841       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 19:56:48.527004       1 config.go:104] "Starting endpoint slice config controller"
	I0910 19:56:48.528318       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 19:56:48.548045       1 config.go:326] "Starting node config controller"
	I0910 19:56:48.548072       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 19:56:48.628236       1 shared_informer.go:320] Caches are synced for service config
	I0910 19:56:48.628522       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 19:56:48.650394       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5cb559fed2d8] <==
	E0910 19:35:38.864572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:38.875237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 19:35:38.875432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:38.900948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 19:35:38.900977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:38.957305       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 19:35:38.957506       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 19:35:38.997653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 19:35:38.997837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.004298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 19:35:39.004563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.017869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 19:35:39.017920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.089188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 19:35:39.089469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.288341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 19:35:39.288858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.326675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 19:35:39.327101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.349957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 19:35:39.350170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.392655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 19:35:39.392930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 19:35:40.833153       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 19:54:30.585174       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c6849e798f8b] <==
	I0910 19:56:44.356001       1 serving.go:386] Generated self-signed cert in-memory
	W0910 19:56:46.395195       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 19:56:46.395391       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 19:56:46.395485       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 19:56:46.395664       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 19:56:46.485826       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 19:56:46.486041       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:56:46.492219       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 19:56:46.492218       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 19:56:46.496339       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 19:56:46.492305       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 19:56:46.597834       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 19:57:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:57:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:57:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:57:41 multinode-629100 kubelet[1621]: I0910 19:57:41.722801    1621 scope.go:117] "RemoveContainer" containerID="ea8d0b0af86deb6ccb74eaa572c29aea279c27e0c505ae63ee7489d8c9754843"
	Sep 10 19:57:41 multinode-629100 kubelet[1621]: I0910 19:57:41.765675    1621 scope.go:117] "RemoveContainer" containerID="76702d5d897eb9bc9bdd58e3affbc3713a47b1053c1736e7ff804fc3e8263753"
	Sep 10 19:58:41 multinode-629100 kubelet[1621]: E0910 19:58:41.720028    1621 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:58:41 multinode-629100 kubelet[1621]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:58:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:58:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:58:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:59:41 multinode-629100 kubelet[1621]: E0910 19:59:41.719702    1621 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:59:41 multinode-629100 kubelet[1621]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:59:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:59:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:59:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 20:00:41 multinode-629100 kubelet[1621]: E0910 20:00:41.720916    1621 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 20:00:41 multinode-629100 kubelet[1621]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 20:00:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 20:00:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 20:00:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 20:01:41 multinode-629100 kubelet[1621]: E0910 20:01:41.721889    1621 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 20:01:41 multinode-629100 kubelet[1621]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 20:01:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 20:01:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 20:01:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-629100 -n multinode-629100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-629100 -n multinode-629100: (10.6608302s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-629100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (557.19s)

TestMultiNode/serial/DeleteNode (47.29s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-629100 node delete m03: exit status 1 (16.0675379s)

-- stdout --
	* Deleting node m03 from cluster multinode-629100

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-windows-amd64.exe -p multinode-629100 node delete m03": exit status 1
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-629100 status --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:424: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-629100 status --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-629100 -n multinode-629100
E0910 20:02:59.980761    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-629100 -n multinode-629100: (10.5159598s)
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 logs -n 25: (7.9724258s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:46 UTC | 10 Sep 24 19:46 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:46 UTC | 10 Sep 24 19:46 UTC |
	|         | multinode-629100-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:46 UTC | 10 Sep 24 19:46 UTC |
	|         | multinode-629100:/home/docker/cp-test_multinode-629100-m02_multinode-629100.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:46 UTC | 10 Sep 24 19:47 UTC |
	|         | multinode-629100-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n multinode-629100 sudo cat                                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | /home/docker/cp-test_multinode-629100-m02_multinode-629100.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | multinode-629100-m03:/home/docker/cp-test_multinode-629100-m02_multinode-629100-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | multinode-629100-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n multinode-629100-m03 sudo cat                                                                    | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | /home/docker/cp-test_multinode-629100-m02_multinode-629100-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp testdata\cp-test.txt                                                                                 | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:47 UTC |
	|         | multinode-629100-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:47 UTC | 10 Sep 24 19:48 UTC |
	|         | multinode-629100-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | multinode-629100-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | multinode-629100:/home/docker/cp-test_multinode-629100-m03_multinode-629100.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | multinode-629100-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n multinode-629100 sudo cat                                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-629100-m03_multinode-629100.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt                                                        | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:48 UTC | 10 Sep 24 19:49 UTC |
	|         | multinode-629100-m02:/home/docker/cp-test_multinode-629100-m03_multinode-629100-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n                                                                                                  | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:49 UTC | 10 Sep 24 19:49 UTC |
	|         | multinode-629100-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-629100 ssh -n multinode-629100-m02 sudo cat                                                                    | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:49 UTC | 10 Sep 24 19:49 UTC |
	|         | /home/docker/cp-test_multinode-629100-m03_multinode-629100-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-629100 node stop m03                                                                                           | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:49 UTC | 10 Sep 24 19:49 UTC |
	| node    | multinode-629100 node start                                                                                              | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:50 UTC | 10 Sep 24 19:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-629100                                                                                                 | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:53 UTC |                     |
	| stop    | -p multinode-629100                                                                                                      | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:53 UTC | 10 Sep 24 19:54 UTC |
	| start   | -p multinode-629100                                                                                                      | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 19:54 UTC | 10 Sep 24 20:02 UTC |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-629100                                                                                                 | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 20:02 UTC |                     |
	| node    | multinode-629100 node delete                                                                                             | multinode-629100 | minikube5\jenkins | v1.34.0 | 10 Sep 24 20:02 UTC |                     |
	|         | m03                                                                                                                      |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 19:54:51
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 19:54:51.349718    8968 out.go:345] Setting OutFile to fd 532 ...
	I0910 19:54:51.393905    8968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:54:51.393905    8968 out.go:358] Setting ErrFile to fd 1028...
	I0910 19:54:51.393905    8968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:54:51.409268    8968 out.go:352] Setting JSON to false
	I0910 19:54:51.411441    8968 start.go:129] hostinfo: {"hostname":"minikube5","uptime":109354,"bootTime":1725888737,"procs":182,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 19:54:51.411441    8968 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 19:54:51.453238    8968 out.go:177] * [multinode-629100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 19:54:51.644576    8968 notify.go:220] Checking for updates...
	I0910 19:54:51.650331    8968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:54:51.714764    8968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 19:54:51.727209    8968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 19:54:51.757677    8968 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 19:54:51.771287    8968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 19:54:51.780509    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:54:51.780509    8968 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 19:54:56.560698    8968 out.go:177] * Using the hyperv driver based on existing profile
	I0910 19:54:56.569262    8968 start.go:297] selected driver: hyperv
	I0910 19:54:56.569262    8968 start.go:901] validating driver "hyperv" against &{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.210.71 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:54:56.569262    8968 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 19:54:56.617787    8968 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:54:56.617787    8968 cni.go:84] Creating CNI manager for ""
	I0910 19:54:56.617787    8968 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 19:54:56.618785    8968 start.go:340] cluster config:
	{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.210.71 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:54:56.618785    8968 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 19:54:56.757103    8968 out.go:177] * Starting "multinode-629100" primary control-plane node in "multinode-629100" cluster
	I0910 19:54:56.798051    8968 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:54:56.798507    8968 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 19:54:56.798617    8968 cache.go:56] Caching tarball of preloaded images
	I0910 19:54:56.799037    8968 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 19:54:56.799494    8968 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 19:54:56.799494    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:54:56.802428    8968 start.go:360] acquireMachinesLock for multinode-629100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:54:56.802734    8968 start.go:364] duration metric: took 169.4µs to acquireMachinesLock for "multinode-629100"
	I0910 19:54:56.803023    8968 start.go:96] Skipping create...Using existing machine configuration
	I0910 19:54:56.803023    8968 fix.go:54] fixHost starting: 
	I0910 19:54:56.803786    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:54:59.113554    8968 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 19:54:59.113554    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:54:59.114584    8968 fix.go:112] recreateIfNeeded on multinode-629100: state=Stopped err=<nil>
	W0910 19:54:59.114637    8968 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 19:54:59.202325    8968 out.go:177] * Restarting existing hyperv VM for "multinode-629100" ...
	I0910 19:54:59.211615    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-629100
	I0910 19:55:02.248485    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:02.248574    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:02.248574    8968 main.go:141] libmachine: Waiting for host to start...
	I0910 19:55:02.248641    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:04.191552    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:04.191552    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:04.191552    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:06.358788    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:06.358980    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:07.362604    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:09.251843    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:09.252249    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:09.252249    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:11.454004    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:11.454158    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:12.459240    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:14.390705    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:14.390705    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:14.390705    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:16.600240    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:16.600240    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:17.603127    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:19.541334    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:19.541848    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:19.541969    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:21.799849    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:55:21.799849    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:22.812796    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:24.768148    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:24.768148    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:24.768148    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:27.088196    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:27.088196    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:27.091273    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:28.970540    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:28.970540    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:28.971322    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:31.202478    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:31.202478    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:31.203629    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:55:31.205918    8968 machine.go:93] provisionDockerMachine start ...
	I0910 19:55:31.206095    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:33.017145    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:33.017145    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:33.017759    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:35.222003    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:35.222257    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:35.226240    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:55:35.226873    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:55:35.226873    8968 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:55:35.356834    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:55:35.356914    8968 buildroot.go:166] provisioning hostname "multinode-629100"
	I0910 19:55:35.357030    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:37.157319    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:37.157319    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:37.157992    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:39.326498    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:39.327278    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:39.330241    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:55:39.330819    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:55:39.330819    8968 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629100 && echo "multinode-629100" | sudo tee /etc/hostname
	I0910 19:55:39.470925    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629100
	
	I0910 19:55:39.471025    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:41.311801    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:41.311801    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:41.312078    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:43.518799    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:43.518799    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:43.522656    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:55:43.523330    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:55:43.523330    8968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:55:43.666796    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
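The hostname step above runs `sudo hostname` first, then patches `/etc/hosts` so the name also survives reboots: rewrite an existing `127.0.1.1` line if one exists, otherwise append one. A minimal sketch of that same branch logic, run against a scratch file so it needs no root (the file contents are stand-ins):

```shell
# Sketch of the /etc/hosts patching run over SSH above, on a temp file.
NAME=multinode-629100
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # an entry exists: rewrite it in place (the sed branch in the log)
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # no entry yet: append one (the `tee -a` branch in the log)
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
RESULT=$(grep '^127\.0\.1\.1' "$HOSTS")
echo "$RESULT"   # → 127.0.1.1 multinode-629100
rm -f "$HOSTS"
```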
	I0910 19:55:43.667002    8968 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 19:55:43.667002    8968 buildroot.go:174] setting up certificates
	I0910 19:55:43.667002    8968 provision.go:84] configureAuth start
	I0910 19:55:43.667175    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:45.490903    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:45.490903    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:45.491280    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:47.672838    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:47.673669    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:47.673669    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:49.478125    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:49.478125    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:49.478125    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:51.676201    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:51.676201    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:51.676869    8968 provision.go:143] copyHostCerts
	I0910 19:55:51.676948    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 19:55:51.677197    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 19:55:51.677197    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 19:55:51.677515    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 19:55:51.678350    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 19:55:51.678475    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 19:55:51.678475    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 19:55:51.678475    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 19:55:51.679245    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 19:55:51.679245    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 19:55:51.679778    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 19:55:51.680005    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 19:55:51.680594    8968 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-629100 san=[127.0.0.1 172.31.215.172 localhost minikube multinode-629100]
	I0910 19:55:51.940690    8968 provision.go:177] copyRemoteCerts
	I0910 19:55:51.949934    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:55:51.949934    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:53.768315    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:53.768315    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:53.768865    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:55:55.948738    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:55:55.948738    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:55.949913    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:55:56.060981    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1107756s)
	I0910 19:55:56.061083    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 19:55:56.061204    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 19:55:56.102074    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 19:55:56.102162    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0910 19:55:56.141093    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 19:55:56.142048    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:55:56.186775    8968 provision.go:87] duration metric: took 12.5189453s to configureAuth
	I0910 19:55:56.186909    8968 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:55:56.187719    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:55:56.187871    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:55:58.006146    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:55:58.006146    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:55:58.006457    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:00.204140    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:00.204140    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:00.208874    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:00.209038    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:00.209038    8968 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 19:56:00.343982    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 19:56:00.344029    8968 buildroot.go:70] root file system type: tmpfs
	I0910 19:56:00.344103    8968 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 19:56:00.344103    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:02.241359    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:02.241898    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:02.241898    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:04.484048    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:04.484048    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:04.488459    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:04.488939    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:04.488939    8968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 19:56:04.647131    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 19:56:04.647238    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:06.532276    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:06.532276    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:06.532276    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:08.757503    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:08.757503    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:08.762888    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:08.763675    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:08.763745    8968 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 19:56:11.168441    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 19:56:11.168550    8968 machine.go:96] duration metric: took 39.9599043s to provisionDockerMachine
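The unit-file update just above is deliberately idempotent: the new content is written to `docker.service.new`, and the `diff ... || { mv ...; systemctl ...; }` one-liner swaps it in (and restarts docker) only when it differs from what is already installed, so unchanged re-provisions touch nothing. The same pattern, sketched on scratch files so no systemd is needed (paths and contents are stand-ins):

```shell
# "Write .new, swap only if different" pattern from the log, on temp files.
DIR=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd --old-flags\n' > "$DIR/docker.service"
printf 'ExecStart=/usr/bin/dockerd --new-flags\n' > "$DIR/docker.service.new"

# diff exits non-zero when the files differ (or the old one is missing,
# as in this log's "can't stat" case), which triggers the swap; the real
# command then runs daemon-reload / enable / restart of docker.
diff -u "$DIR/docker.service" "$DIR/docker.service.new" >/dev/null \
  || mv "$DIR/docker.service.new" "$DIR/docker.service"

CONTENT=$(cat "$DIR/docker.service")
echo "$CONTENT"   # → ExecStart=/usr/bin/dockerd --new-flags
rm -rf "$DIR"
```

The leading empty `ExecStart=` in the generated unit matters for the same reason: systemd treats multiple `ExecStart=` lines as an error for `Type=notify` services, so the blank assignment clears any inherited command before setting the new one.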
	I0910 19:56:11.168600    8968 start.go:293] postStartSetup for "multinode-629100" (driver="hyperv")
	I0910 19:56:11.168600    8968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:56:11.176637    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:56:11.176637    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:13.016876    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:13.016876    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:13.017587    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:15.249047    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:15.249047    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:15.250101    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:56:15.350483    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1735704s)
	I0910 19:56:15.358673    8968 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:56:15.365836    8968 command_runner.go:130] > NAME=Buildroot
	I0910 19:56:15.365943    8968 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 19:56:15.365943    8968 command_runner.go:130] > ID=buildroot
	I0910 19:56:15.365943    8968 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 19:56:15.365943    8968 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 19:56:15.366076    8968 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:56:15.366140    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 19:56:15.366465    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 19:56:15.366897    8968 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 19:56:15.366897    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 19:56:15.375577    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:56:15.392551    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 19:56:15.445284    8968 start.go:296] duration metric: took 4.2764022s for postStartSetup
	I0910 19:56:15.445284    8968 fix.go:56] duration metric: took 1m18.6370603s for fixHost
	I0910 19:56:15.445284    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:17.260547    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:17.260547    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:17.261046    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:19.525121    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:19.525121    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:19.529821    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:19.530372    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:19.530455    8968 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:56:19.657935    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725998179.875545666
	
	I0910 19:56:19.657935    8968 fix.go:216] guest clock: 1725998179.875545666
	I0910 19:56:19.657935    8968 fix.go:229] Guest: 2024-09-10 19:56:19.875545666 +0000 UTC Remote: 2024-09-10 19:56:15.4452848 +0000 UTC m=+84.164019901 (delta=4.430260866s)
	I0910 19:56:19.658021    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:21.569597    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:21.569597    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:21.570477    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:23.827065    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:23.827065    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:23.832337    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:56:23.833025    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.215.172 22 <nil> <nil>}
	I0910 19:56:23.833025    8968 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725998179
	I0910 19:56:23.966594    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 19:56:19 UTC 2024
	
	I0910 19:56:23.966674    8968 fix.go:236] clock set: Tue Sep 10 19:56:19 UTC 2024
	 (err=<nil>)
	I0910 19:56:23.966674    8968 start.go:83] releasing machines lock for "multinode-629100", held for 1m27.1581778s
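The clock-fix step above reads the guest clock with `date +%s.%N`, compares it to the host-side timestamp, and resets the guest via `sudo date -s @<epoch>` when the drift (here 4.43s) is too large. The delta computation can be sketched with the epochs from this log (the host epoch is approximate, derived from the `Remote:` timestamp printed above):

```shell
# Guest-vs-host clock drift, using the timestamps from this log run.
GUEST=1725998179   # guest epoch from `date +%s.%N` (fractional part dropped)
HOST=1725998175    # host-side epoch at the same moment (approx., from the log)
DELTA=$((GUEST - HOST))
echo "delta=${DELTA}s"   # → delta=4s
```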
	I0910 19:56:23.966940    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:25.805789    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:25.805789    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:25.806273    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:28.057139    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:28.057139    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:28.061931    8968 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 19:56:28.062021    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:28.070593    8968 ssh_runner.go:195] Run: cat /version.json
	I0910 19:56:28.070593    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:56:29.995916    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:29.995916    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:30.001879    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:30.006636    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:56:30.006636    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:30.006636    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:56:32.328310    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:32.329469    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:32.329546    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:56:32.351074    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:56:32.351074    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:56:32.351074    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:56:32.422719    8968 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0910 19:56:32.422799    8968 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3605809s)
	W0910 19:56:32.422920    8968 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 19:56:32.439045    8968 command_runner.go:130] > {"iso_version": "v1.34.0-1725912912-19598", "kicbase_version": "v0.0.45", "minikube_version": "v1.34.0", "commit": "a47e98bacf93197560d0f08408949de0434951d5"}
	I0910 19:56:32.439045    8968 ssh_runner.go:235] Completed: cat /version.json: (4.3681648s)
	I0910 19:56:32.449121    8968 ssh_runner.go:195] Run: systemctl --version
	I0910 19:56:32.458656    8968 command_runner.go:130] > systemd 252 (252)
	I0910 19:56:32.458656    8968 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0910 19:56:32.467038    8968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 19:56:32.474641    8968 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0910 19:56:32.475633    8968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:56:32.484219    8968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:56:32.509754    8968 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0910 19:56:32.510072    8968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:56:32.510072    8968 start.go:495] detecting cgroup driver to use...
	I0910 19:56:32.510345    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:56:32.551712    8968 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0910 19:56:32.561610    8968 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 19:56:32.561717    8968 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 19:56:32.564699    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 19:56:32.594495    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 19:56:32.612335    8968 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 19:56:32.625083    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 19:56:32.650676    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:56:32.676749    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 19:56:32.703464    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:56:32.730334    8968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:56:32.759064    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 19:56:32.784719    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 19:56:32.813565    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 19:56:32.839376    8968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:56:32.861766    8968 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 19:56:32.872274    8968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:56:32.901215    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:33.068127    8968 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 19:56:33.095453    8968 start.go:495] detecting cgroup driver to use...
	I0910 19:56:33.104067    8968 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 19:56:33.126431    8968 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0910 19:56:33.126462    8968 command_runner.go:130] > [Unit]
	I0910 19:56:33.126462    8968 command_runner.go:130] > Description=Docker Application Container Engine
	I0910 19:56:33.126462    8968 command_runner.go:130] > Documentation=https://docs.docker.com
	I0910 19:56:33.126462    8968 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0910 19:56:33.126572    8968 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0910 19:56:33.126572    8968 command_runner.go:130] > StartLimitBurst=3
	I0910 19:56:33.126610    8968 command_runner.go:130] > StartLimitIntervalSec=60
	I0910 19:56:33.126610    8968 command_runner.go:130] > [Service]
	I0910 19:56:33.126610    8968 command_runner.go:130] > Type=notify
	I0910 19:56:33.126658    8968 command_runner.go:130] > Restart=on-failure
	I0910 19:56:33.126697    8968 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0910 19:56:33.126697    8968 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0910 19:56:33.126743    8968 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0910 19:56:33.126782    8968 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0910 19:56:33.126782    8968 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0910 19:56:33.126823    8968 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0910 19:56:33.126861    8968 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0910 19:56:33.126861    8968 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0910 19:56:33.126909    8968 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0910 19:56:33.126947    8968 command_runner.go:130] > ExecStart=
	I0910 19:56:33.126993    8968 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0910 19:56:33.127031    8968 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0910 19:56:33.127078    8968 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0910 19:56:33.127116    8968 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0910 19:56:33.127116    8968 command_runner.go:130] > LimitNOFILE=infinity
	I0910 19:56:33.127163    8968 command_runner.go:130] > LimitNPROC=infinity
	I0910 19:56:33.127163    8968 command_runner.go:130] > LimitCORE=infinity
	I0910 19:56:33.127202    8968 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0910 19:56:33.127202    8968 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0910 19:56:33.127247    8968 command_runner.go:130] > TasksMax=infinity
	I0910 19:56:33.127247    8968 command_runner.go:130] > TimeoutStartSec=0
	I0910 19:56:33.127287    8968 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0910 19:56:33.127287    8968 command_runner.go:130] > Delegate=yes
	I0910 19:56:33.127332    8968 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0910 19:56:33.127332    8968 command_runner.go:130] > KillMode=process
	I0910 19:56:33.127371    8968 command_runner.go:130] > [Install]
	I0910 19:56:33.127371    8968 command_runner.go:130] > WantedBy=multi-user.target
	I0910 19:56:33.136772    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:56:33.167671    8968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:56:33.198610    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:56:33.229167    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:56:33.259175    8968 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 19:56:33.316352    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:56:33.338341    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:56:33.368711    8968 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0910 19:56:33.377603    8968 ssh_runner.go:195] Run: which cri-dockerd
	I0910 19:56:33.382637    8968 command_runner.go:130] > /usr/bin/cri-dockerd
	I0910 19:56:33.390594    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 19:56:33.406669    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 19:56:33.441617    8968 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 19:56:33.628107    8968 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 19:56:33.791205    8968 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 19:56:33.791377    8968 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 19:56:33.835239    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:34.010288    8968 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 19:56:36.641133    8968 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.630672s)
	I0910 19:56:36.649244    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 19:56:36.679771    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:56:36.709773    8968 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 19:56:36.884369    8968 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 19:56:37.045145    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:37.213939    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 19:56:37.250747    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:56:37.284206    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:37.450657    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 19:56:37.542232    8968 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 19:56:37.556832    8968 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 19:56:37.564808    8968 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0910 19:56:37.564808    8968 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 19:56:37.564808    8968 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0910 19:56:37.564808    8968 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0910 19:56:37.564808    8968 command_runner.go:130] > Access: 2024-09-10 19:56:37.695183981 +0000
	I0910 19:56:37.564808    8968 command_runner.go:130] > Modify: 2024-09-10 19:56:37.695183981 +0000
	I0910 19:56:37.564808    8968 command_runner.go:130] > Change: 2024-09-10 19:56:37.700184717 +0000
	I0910 19:56:37.564808    8968 command_runner.go:130] >  Birth: -
	I0910 19:56:37.564808    8968 start.go:563] Will wait 60s for crictl version
	I0910 19:56:37.572799    8968 ssh_runner.go:195] Run: which crictl
	I0910 19:56:37.578543    8968 command_runner.go:130] > /usr/bin/crictl
	I0910 19:56:37.586389    8968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:56:37.633980    8968 command_runner.go:130] > Version:  0.1.0
	I0910 19:56:37.633980    8968 command_runner.go:130] > RuntimeName:  docker
	I0910 19:56:37.633980    8968 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0910 19:56:37.633980    8968 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 19:56:37.633980    8968 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 19:56:37.641240    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:56:37.672602    8968 command_runner.go:130] > 27.2.0
	I0910 19:56:37.681777    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:56:37.708014    8968 command_runner.go:130] > 27.2.0
	I0910 19:56:37.711076    8968 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 19:56:37.711398    8968 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 19:56:37.718180    8968 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 19:56:37.718247    8968 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 19:56:37.718247    8968 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 19:56:37.718247    8968 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 19:56:37.720538    8968 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 19:56:37.720538    8968 ip.go:214] interface addr: 172.31.208.1/20
	I0910 19:56:37.728892    8968 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 19:56:37.734642    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:56:37.753261    8968 kubeadm.go:883] updating cluster {Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:56:37.753780    8968 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:56:37.761890    8968 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 19:56:37.788312    8968 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0910 19:56:37.788411    8968 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 19:56:37.788411    8968 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0910 19:56:37.788411    8968 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0910 19:56:37.788411    8968 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0910 19:56:37.788479    8968 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0910 19:56:37.788479    8968 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0910 19:56:37.788479    8968 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0910 19:56:37.788479    8968 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:56:37.788479    8968 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0910 19:56:37.788479    8968 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0910 19:56:37.788479    8968 docker.go:615] Images already preloaded, skipping extraction
	I0910 19:56:37.795589    8968 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 19:56:37.816540    8968 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0910 19:56:37.816540    8968 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0910 19:56:37.816540    8968 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:56:37.816540    8968 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0910 19:56:37.816540    8968 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0910 19:56:37.816540    8968 cache_images.go:84] Images are preloaded, skipping loading
	I0910 19:56:37.816540    8968 kubeadm.go:934] updating node { 172.31.215.172 8443 v1.31.0 docker true true} ...
	I0910 19:56:37.817541    8968 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-629100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.215.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:56:37.824538    8968 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 19:56:37.888144    8968 command_runner.go:130] > cgroupfs
	I0910 19:56:37.888144    8968 cni.go:84] Creating CNI manager for ""
	I0910 19:56:37.888144    8968 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 19:56:37.888144    8968 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 19:56:37.888144    8968 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.215.172 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-629100 NodeName:multinode-629100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.215.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.215.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:56:37.888144    8968 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.215.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-629100"
	  kubeletExtraArgs:
	    node-ip: 172.31.215.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.215.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:56:37.896670    8968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:56:37.913680    8968 command_runner.go:130] > kubeadm
	I0910 19:56:37.913739    8968 command_runner.go:130] > kubectl
	I0910 19:56:37.913739    8968 command_runner.go:130] > kubelet
	I0910 19:56:37.913739    8968 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:56:37.925790    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:56:37.940264    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 19:56:37.968391    8968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:56:37.996814    8968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0910 19:56:38.039321    8968 ssh_runner.go:195] Run: grep 172.31.215.172	control-plane.minikube.internal$ /etc/hosts
	I0910 19:56:38.045004    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.215.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:56:38.075801    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:38.234804    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:56:38.261215    8968 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100 for IP: 172.31.215.172
	I0910 19:56:38.261289    8968 certs.go:194] generating shared ca certs ...
	I0910 19:56:38.261289    8968 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:38.262036    8968 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 19:56:38.262175    8968 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 19:56:38.262523    8968 certs.go:256] generating profile certs ...
	I0910 19:56:38.263476    8968 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\client.key
	I0910 19:56:38.263704    8968 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.19ae5440
	I0910 19:56:38.263899    8968 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.19ae5440 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.215.172]
	I0910 19:56:38.352070    8968 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.19ae5440 ...
	I0910 19:56:38.353069    8968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.19ae5440: {Name:mka4aa739e1e31d272e3a0c83d71990004ea368f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:38.353318    8968 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.19ae5440 ...
	I0910 19:56:38.353318    8968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.19ae5440: {Name:mk0f0b1f0e62f4cc00cc755cf935f1f4f74aa76a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:38.354397    8968 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt.19ae5440 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt
	I0910 19:56:38.368314    8968 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key.19ae5440 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key
	I0910 19:56:38.370576    8968 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key
	I0910 19:56:38.370576    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 19:56:38.370912    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 19:56:38.370912    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 19:56:38.370912    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 19:56:38.371455    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 19:56:38.371708    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 19:56:38.371972    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 19:56:38.372703    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 19:56:38.373212    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 19:56:38.373381    8968 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 19:56:38.373740    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 19:56:38.373997    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 19:56:38.374190    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 19:56:38.374190    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 19:56:38.374859    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 19:56:38.374859    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.374859    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 19:56:38.374859    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 19:56:38.376241    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:56:38.421258    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 19:56:38.464846    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:56:38.501694    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 19:56:38.546382    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 19:56:38.586301    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:56:38.625231    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:56:38.665770    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:56:38.704754    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:56:38.743619    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 19:56:38.783193    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 19:56:38.821090    8968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:56:38.856080    8968 ssh_runner.go:195] Run: openssl version
	I0910 19:56:38.865125    8968 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 19:56:38.877220    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:56:38.903309    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.908944    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.909848    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.917705    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:56:38.925186    8968 command_runner.go:130] > b5213941
	I0910 19:56:38.934489    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:56:38.957691    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 19:56:38.986436    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 19:56:38.992640    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:56:38.992640    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:56:39.003182    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 19:56:39.011660    8968 command_runner.go:130] > 51391683
	I0910 19:56:39.019826    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 19:56:39.044940    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 19:56:39.074742    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 19:56:39.081680    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:56:39.081680    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:56:39.092804    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 19:56:39.100846    8968 command_runner.go:130] > 3ec20f2e
	I0910 19:56:39.109625    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:56:39.136173    8968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:56:39.143744    8968 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:56:39.143744    8968 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0910 19:56:39.143744    8968 command_runner.go:130] > Device: 8,1	Inode: 5242685     Links: 1
	I0910 19:56:39.143744    8968 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 19:56:39.143744    8968 command_runner.go:130] > Access: 2024-09-10 19:35:28.653571928 +0000
	I0910 19:56:39.143744    8968 command_runner.go:130] > Modify: 2024-09-10 19:35:28.653571928 +0000
	I0910 19:56:39.143744    8968 command_runner.go:130] > Change: 2024-09-10 19:35:28.653571928 +0000
	I0910 19:56:39.143863    8968 command_runner.go:130] >  Birth: 2024-09-10 19:35:28.653571928 +0000
	I0910 19:56:39.151948    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:56:39.161535    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.171433    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:56:39.180962    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.190125    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:56:39.200301    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.207469    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:56:39.216732    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.225321    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:56:39.234561    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.242315    8968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 19:56:39.251876    8968 command_runner.go:130] > Certificate will not expire
	I0910 19:56:39.252160    8968 kubeadm.go:392] StartCluster: {Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.209.0 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:56:39.258172    8968 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 19:56:39.290840    8968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:56:39.312255    8968 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0910 19:56:39.312255    8968 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0910 19:56:39.312255    8968 command_runner.go:130] > /var/lib/minikube/etcd:
	I0910 19:56:39.312255    8968 command_runner.go:130] > member
	I0910 19:56:39.312848    8968 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 19:56:39.312891    8968 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 19:56:39.323278    8968 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 19:56:39.338626    8968 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:56:39.339755    8968 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-629100" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:56:39.340650    8968 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-629100" cluster setting kubeconfig missing "multinode-629100" context setting]
	I0910 19:56:39.341091    8968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:39.356996    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:56:39.357446    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:56:39.358433    8968 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 19:56:39.366456    8968 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:56:39.382610    8968 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0910 19:56:39.382610    8968 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:56:39.382696    8968 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0910 19:56:39.382696    8968 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0910 19:56:39.382696    8968 command_runner.go:130] >  kind: InitConfiguration
	I0910 19:56:39.382696    8968 command_runner.go:130] >  localAPIEndpoint:
	I0910 19:56:39.382696    8968 command_runner.go:130] > -  advertiseAddress: 172.31.210.71
	I0910 19:56:39.382696    8968 command_runner.go:130] > +  advertiseAddress: 172.31.215.172
	I0910 19:56:39.382696    8968 command_runner.go:130] >    bindPort: 8443
	I0910 19:56:39.382696    8968 command_runner.go:130] >  bootstrapTokens:
	I0910 19:56:39.382696    8968 command_runner.go:130] >    - groups:
	I0910 19:56:39.382696    8968 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0910 19:56:39.382769    8968 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0910 19:56:39.382769    8968 command_runner.go:130] >    name: "multinode-629100"
	I0910 19:56:39.382769    8968 command_runner.go:130] >    kubeletExtraArgs:
	I0910 19:56:39.382769    8968 command_runner.go:130] > -    node-ip: 172.31.210.71
	I0910 19:56:39.382769    8968 command_runner.go:130] > +    node-ip: 172.31.215.172
	I0910 19:56:39.382769    8968 command_runner.go:130] >    taints: []
	I0910 19:56:39.382769    8968 command_runner.go:130] >  ---
	I0910 19:56:39.382769    8968 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0910 19:56:39.382769    8968 command_runner.go:130] >  kind: ClusterConfiguration
	I0910 19:56:39.382875    8968 command_runner.go:130] >  apiServer:
	I0910 19:56:39.382875    8968 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.31.210.71"]
	I0910 19:56:39.382875    8968 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.31.215.172"]
	I0910 19:56:39.382875    8968 command_runner.go:130] >    extraArgs:
	I0910 19:56:39.382955    8968 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0910 19:56:39.382955    8968 command_runner.go:130] >  controllerManager:
	I0910 19:56:39.383034    8968 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.31.210.71
	+  advertiseAddress: 172.31.215.172
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-629100"
	   kubeletExtraArgs:
	-    node-ip: 172.31.210.71
	+    node-ip: 172.31.215.172
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.31.210.71"]
	+  certSANs: ["127.0.0.1", "localhost", "172.31.215.172"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0910 19:56:39.383034    8968 kubeadm.go:1160] stopping kube-system containers ...
	I0910 19:56:39.388935    8968 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 19:56:39.417916    8968 command_runner.go:130] > 039fd49f157a
	I0910 19:56:39.417916    8968 command_runner.go:130] > 35f4bfd5434b
	I0910 19:56:39.417916    8968 command_runner.go:130] > 3a4b56ccc379
	I0910 19:56:39.417916    8968 command_runner.go:130] > bf116f91589f
	I0910 19:56:39.417916    8968 command_runner.go:130] > 33f88ed7aee2
	I0910 19:56:39.417916    8968 command_runner.go:130] > 85b03f498671
	I0910 19:56:39.417916    8968 command_runner.go:130] > 1d92603202b0
	I0910 19:56:39.417916    8968 command_runner.go:130] > 4e550827f00f
	I0910 19:56:39.417916    8968 command_runner.go:130] > 76702d5d897e
	I0910 19:56:39.417916    8968 command_runner.go:130] > ea8d0b0af86d
	I0910 19:56:39.417916    8968 command_runner.go:130] > 5cb559fed2d8
	I0910 19:56:39.417916    8968 command_runner.go:130] > ea7220d439d1
	I0910 19:56:39.417916    8968 command_runner.go:130] > d3ab7c79ce4b
	I0910 19:56:39.417916    8968 command_runner.go:130] > db7037ca07a4
	I0910 19:56:39.417916    8968 command_runner.go:130] > 49d9c6949234
	I0910 19:56:39.417916    8968 command_runner.go:130] > f85e1c01f68d
	I0910 19:56:39.423100    8968 docker.go:483] Stopping containers: [039fd49f157a 35f4bfd5434b 3a4b56ccc379 bf116f91589f 33f88ed7aee2 85b03f498671 1d92603202b0 4e550827f00f 76702d5d897e ea8d0b0af86d 5cb559fed2d8 ea7220d439d1 d3ab7c79ce4b db7037ca07a4 49d9c6949234 f85e1c01f68d]
	I0910 19:56:39.429457    8968 ssh_runner.go:195] Run: docker stop 039fd49f157a 35f4bfd5434b 3a4b56ccc379 bf116f91589f 33f88ed7aee2 85b03f498671 1d92603202b0 4e550827f00f 76702d5d897e ea8d0b0af86d 5cb559fed2d8 ea7220d439d1 d3ab7c79ce4b db7037ca07a4 49d9c6949234 f85e1c01f68d
	I0910 19:56:39.452206    8968 command_runner.go:130] > 039fd49f157a
	I0910 19:56:39.452206    8968 command_runner.go:130] > 35f4bfd5434b
	I0910 19:56:39.452206    8968 command_runner.go:130] > 3a4b56ccc379
	I0910 19:56:39.452206    8968 command_runner.go:130] > bf116f91589f
	I0910 19:56:39.452206    8968 command_runner.go:130] > 33f88ed7aee2
	I0910 19:56:39.452206    8968 command_runner.go:130] > 85b03f498671
	I0910 19:56:39.452206    8968 command_runner.go:130] > 1d92603202b0
	I0910 19:56:39.453280    8968 command_runner.go:130] > 4e550827f00f
	I0910 19:56:39.453280    8968 command_runner.go:130] > 76702d5d897e
	I0910 19:56:39.453280    8968 command_runner.go:130] > ea8d0b0af86d
	I0910 19:56:39.453362    8968 command_runner.go:130] > 5cb559fed2d8
	I0910 19:56:39.453382    8968 command_runner.go:130] > ea7220d439d1
	I0910 19:56:39.453382    8968 command_runner.go:130] > d3ab7c79ce4b
	I0910 19:56:39.453382    8968 command_runner.go:130] > db7037ca07a4
	I0910 19:56:39.453382    8968 command_runner.go:130] > 49d9c6949234
	I0910 19:56:39.453382    8968 command_runner.go:130] > f85e1c01f68d
	I0910 19:56:39.466190    8968 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 19:56:39.501405    8968 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:56:39.517879    8968 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0910 19:56:39.517952    8968 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0910 19:56:39.517952    8968 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0910 19:56:39.517952    8968 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:56:39.518698    8968 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:56:39.518780    8968 kubeadm.go:157] found existing configuration files:
	
	I0910 19:56:39.528917    8968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:56:39.545046    8968 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:56:39.545046    8968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:56:39.558329    8968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:56:39.587248    8968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:56:39.604180    8968 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:56:39.604357    8968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:56:39.612388    8968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:56:39.640357    8968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:56:39.656666    8968 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:56:39.656666    8968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:56:39.668563    8968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:56:39.697326    8968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:56:39.713909    8968 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:56:39.714260    8968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:56:39.724205    8968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:56:39.748711    8968 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:56:39.764102    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:39.972487    8968 command_runner.go:130] ! W0910 19:56:40.198008    1589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:39.972818    8968 command_runner.go:130] ! W0910 19:56:40.199067    1589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:56:39.983160    8968 command_runner.go:130] > [certs] Using the existing "sa" key
	I0910 19:56:39.983160    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:40.040403    8968 command_runner.go:130] ! W0910 19:56:40.266499    1594 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:40.041482    8968 command_runner.go:130] ! W0910 19:56:40.267380    1594 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:56:40.940268    8968 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:56:40.941310    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:41.001988    8968 command_runner.go:130] ! W0910 19:56:41.227831    1599 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.002223    8968 command_runner.go:130] ! W0910 19:56:41.228835    1599 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.216732    8968 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:56:41.216732    8968 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:56:41.216732    8968 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0910 19:56:41.216991    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:41.275754    8968 command_runner.go:130] ! W0910 19:56:41.501519    1627 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.276450    8968 command_runner.go:130] ! W0910 19:56:41.502556    1627 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.288826    8968 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:56:41.288889    8968 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:56:41.288889    8968 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:56:41.288967    8968 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:56:41.289052    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:41.360679    8968 command_runner.go:130] ! W0910 19:56:41.586136    1632 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.361321    8968 command_runner.go:130] ! W0910 19:56:41.587528    1632 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:41.372319    8968 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:56:41.373305    8968 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:56:41.381296    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:41.886855    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:42.391668    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:42.887719    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:43.396225    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:56:43.419238    8968 command_runner.go:130] > 1954
	I0910 19:56:43.420241    8968 api_server.go:72] duration metric: took 2.0466435s to wait for apiserver process to appear ...
	I0910 19:56:43.420241    8968 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:56:43.420316    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:46.157624    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:56:46.157624    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:56:46.157624    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:46.230611    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:56:46.230611    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:56:46.433226    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:46.441472    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:56:46.441960    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:56:46.925693    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:46.932716    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:56:46.933511    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:56:47.435691    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:47.444457    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:56:47.444539    8968 api_server.go:103] status: https://172.31.215.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:56:47.929837    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:56:47.937304    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 200:
	ok
	I0910 19:56:47.937400    8968 round_trippers.go:463] GET https://172.31.215.172:8443/version
	I0910 19:56:47.937400    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:47.937400    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:47.937400    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:47.946102    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:56:47.946102    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Audit-Id: 713a18e9-dfc4-4639-b141-13fdcfdd6f42
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:47.946102    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:47.946102    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Content-Length: 263
	I0910 19:56:47.946102    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:48 GMT
	I0910 19:56:47.946102    8968 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0910 19:56:47.947107    8968 api_server.go:141] control plane version: v1.31.0
	I0910 19:56:47.947107    8968 api_server.go:131] duration metric: took 4.5265683s to wait for apiserver health ...
	I0910 19:56:47.947107    8968 cni.go:84] Creating CNI manager for ""
	I0910 19:56:47.947107    8968 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 19:56:47.949109    8968 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0910 19:56:47.960086    8968 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0910 19:56:47.968545    8968 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0910 19:56:47.968619    8968 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0910 19:56:47.968619    8968 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0910 19:56:47.968619    8968 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 19:56:47.968619    8968 command_runner.go:130] > Access: 2024-09-10 19:55:27.060111908 +0000
	I0910 19:56:47.968679    8968 command_runner.go:130] > Modify: 2024-09-10 02:48:06.000000000 +0000
	I0910 19:56:47.968704    8968 command_runner.go:130] > Change: 2024-09-10 19:55:15.625000000 +0000
	I0910 19:56:47.968704    8968 command_runner.go:130] >  Birth: -
	I0910 19:56:47.969691    8968 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0910 19:56:47.969748    8968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0910 19:56:48.004245    8968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0910 19:56:48.828867    8968 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0910 19:56:48.828961    8968 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0910 19:56:48.828961    8968 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0910 19:56:48.828961    8968 command_runner.go:130] > daemonset.apps/kindnet configured
	I0910 19:56:48.828961    8968 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:56:48.828961    8968 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 19:56:48.828961    8968 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 19:56:48.828961    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:56:48.829485    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:48.829485    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:48.829553    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:48.834199    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:56:48.834238    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:48.834238    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:48.834238    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:48.834238    8968 round_trippers.go:580]     Audit-Id: 5bde7628-1930-4ae6-8932-b38e67eee1fd
	I0910 19:56:48.834238    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:48.834238    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:48.834238    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:48.836465    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1665"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90770 chars]
	I0910 19:56:48.844126    8968 system_pods.go:59] 12 kube-system pods found
	I0910 19:56:48.844126    8968 system_pods.go:61] "coredns-6f6b679f8f-srtv8" [76dd899a-75f4-497d-a6a9-6b263f3a379d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:56:48.844126    8968 system_pods.go:61] "etcd-multinode-629100" [2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:56:48.844126    8968 system_pods.go:61] "kindnet-5crht" [d569a3a6-5b06-4adf-9ac0-294274923906] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kindnet-6tdpv" [2c45f0f2-5d24-4ec2-8e6b-06923ea85e78] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kindnet-lj2v2" [6643175a-32c5-4461-a441-08b9ac9ba98a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-apiserver-multinode-629100" [5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-controller-manager-multinode-629100" [60adcb0c-808a-477c-9432-83dc8f96d6c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-proxy-4tzx6" [9bb18c28-3ee9-4028-a61d-3d7f6ea31894] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-proxy-qqrrg" [1fc7fdda-d5e4-4c72-96c1-2348eb72b491] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-proxy-wqf2d" [27e7846e-e506-48e0-96b9-351b3ca91703] Running
	I0910 19:56:48.844126    8968 system_pods.go:61] "kube-scheduler-multinode-629100" [fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:56:48.844126    8968 system_pods.go:61] "storage-provisioner" [511e6e52-fc2a-4562-9903-42ee1f2e0a2d] Running
	I0910 19:56:48.844126    8968 system_pods.go:74] duration metric: took 15.1645ms to wait for pod list to return data ...
	I0910 19:56:48.844126    8968 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:56:48.844126    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes
	I0910 19:56:48.844126    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:48.844126    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:48.844126    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:48.850348    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:48.850348    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:48.850348    8968 round_trippers.go:580]     Audit-Id: 5681b103-bfc7-4d17-b96c-df41bc8d3fc4
	I0910 19:56:48.850348    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:48.850348    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:48.850348    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:48.850779    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:48.850779    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:48.851108    8968 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1665"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15609 chars]
	I0910 19:56:48.852347    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:56:48.852416    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:56:48.852416    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:56:48.852416    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:56:48.852416    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:56:48.852416    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:56:48.852416    8968 node_conditions.go:105] duration metric: took 8.2895ms to run NodePressure ...
	I0910 19:56:48.852486    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:56:48.912312    8968 command_runner.go:130] ! W0910 19:56:49.138787    2436 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:48.913066    8968 command_runner.go:130] ! W0910 19:56:49.139755    2436 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:56:49.159277    8968 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0910 19:56:49.159277    8968 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0910 19:56:49.159277    8968 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 19:56:49.159660    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0910 19:56:49.159660    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.159660    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.159660    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.168638    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:56:49.168638    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.169647    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.169647    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.169647    8968 round_trippers.go:580]     Audit-Id: b4ed1f29-dcfe-485b-b5f4-0129d38348cf
	I0910 19:56:49.169647    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.169647    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.169647    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.169647    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1667"},"items":[{"metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1661","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31351 chars]
	I0910 19:56:49.170644    8968 kubeadm.go:739] kubelet initialised
	I0910 19:56:49.170644    8968 kubeadm.go:740] duration metric: took 11.3659ms waiting for restarted kubelet to initialise ...
	I0910 19:56:49.170644    8968 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:56:49.170644    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:56:49.170644    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.170644    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.170644    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.184625    8968 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0910 19:56:49.184625    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.184625    8968 round_trippers.go:580]     Audit-Id: 60048603-659d-46aa-afad-549ffea23669
	I0910 19:56:49.184625    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.184625    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.184986    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.184986    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.184986    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.186337    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1668"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90582 chars]
	I0910 19:56:49.190549    8968 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.190610    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:56:49.190610    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.190610    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.190610    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.196148    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:56:49.196148    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.196148    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.196148    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.196148    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.196148    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.196148    8968 round_trippers.go:580]     Audit-Id: 30588b13-3bb3-4aaf-ba35-0a4ab5b59092
	I0910 19:56:49.196148    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.197133    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:56:49.197133    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:49.197133    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.197133    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.197133    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.200742    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:49.201078    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.201078    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.201078    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.201078    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.201078    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.201078    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.201078    8968 round_trippers.go:580]     Audit-Id: 1aabc449-22db-460e-ac73-1f108c2b2f96
	I0910 19:56:49.201324    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0910 19:56:49.201324    8968 pod_ready.go:98] node "multinode-629100" hosting pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.201324    8968 pod_ready.go:82] duration metric: took 10.7127ms for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.201324    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.201324    8968 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.201860    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 19:56:49.201896    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.201896    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.201896    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.203478    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:56:49.203478    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.203478    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.203478    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.203478    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.204469    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.204469    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.204469    8968 round_trippers.go:580]     Audit-Id: 070618bf-0694-4d32-a725-f9b938f84315
	I0910 19:56:49.204658    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1661","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6841 chars]
	I0910 19:56:49.205086    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:49.205147    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.205147    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.205147    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.206601    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:56:49.206601    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.206601    8968 round_trippers.go:580]     Audit-Id: 9f48826b-8f2b-4e54-8568-f3c35d420bcd
	I0910 19:56:49.206601    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.206601    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.206601    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.206601    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.206601    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.206601    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0910 19:56:49.207608    8968 pod_ready.go:98] node "multinode-629100" hosting pod "etcd-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.207608    8968 pod_ready.go:82] duration metric: took 6.2843ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.207608    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "etcd-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.207608    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.207608    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 19:56:49.207608    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.207608    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.207608    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.209602    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:56:49.209602    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.209602    8968 round_trippers.go:580]     Audit-Id: aec56f80-9d90-49e5-bed0-0f85ee99fb57
	I0910 19:56:49.209602    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.209602    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.209602    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.209602    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.209602    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.210587    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5","resourceVersion":"1660","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.215.172:8443","kubernetes.io/config.hash":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.mirror":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.seen":"2024-09-10T19:56:41.591483300Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8293 chars]
	I0910 19:56:49.210587    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:49.210587    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.210587    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.210587    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.212627    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.212627    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.212627    8968 round_trippers.go:580]     Audit-Id: 97ced73a-fd63-47bb-bb91-c7b109fb85bc
	I0910 19:56:49.212627    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.212627    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.212627    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.212627    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.212627    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.213585    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0910 19:56:49.213585    8968 pod_ready.go:98] node "multinode-629100" hosting pod "kube-apiserver-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.213585    8968 pod_ready.go:82] duration metric: took 5.9763ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.213585    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "kube-apiserver-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.213585    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.213585    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 19:56:49.213585    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.213585    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.213585    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.215614    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.215614    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.215614    8968 round_trippers.go:580]     Audit-Id: e1ce9c3a-875a-4aa7-aa58-4e1fe654173f
	I0910 19:56:49.215614    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.215614    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.215614    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.215614    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.215614    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.216610    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"1648","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7735 chars]
	I0910 19:56:49.229665    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:49.229944    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.229944    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.229944    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.232597    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.232597    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.232597    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.232597    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.232597    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.233123    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.233123    8968 round_trippers.go:580]     Audit-Id: 7d6969ee-c088-41fc-8d43-157108eae7d8
	I0910 19:56:49.233123    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.233316    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1640","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0910 19:56:49.233947    8968 pod_ready.go:98] node "multinode-629100" hosting pod "kube-controller-manager-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.234020    8968 pod_ready.go:82] duration metric: took 20.434ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.234020    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "kube-controller-manager-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:49.234020    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.435280    8968 request.go:632] Waited for 200.908ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:56:49.435280    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:56:49.435552    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.435552    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.435552    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.438149    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.438149    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.438149    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.438149    8968 round_trippers.go:580]     Audit-Id: 88339815-590c-4b5a-91e4-0a88ab399b0d
	I0910 19:56:49.438149    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.438149    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.438149    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.438149    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.439515    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4tzx6","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bb18c28-3ee9-4028-a61d-3d7f6ea31894","resourceVersion":"1613","creationTimestamp":"2024-09-10T19:42:55Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0910 19:56:49.640956    8968 request.go:632] Waited for 200.5808ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:56:49.641544    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:56:49.641544    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.641652    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.641652    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.645006    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:49.645006    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.645006    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.645006    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.645006    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:49 GMT
	I0910 19:56:49.645006    8968 round_trippers.go:580]     Audit-Id: 610818d5-79b1-4fab-93a9-69606dada142
	I0910 19:56:49.645006    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.645006    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.645619    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"39f4665c-e276-4967-a647-46b3a9a1c67a","resourceVersion":"1621","creationTimestamp":"2024-09-10T19:52:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_52_30_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4394 chars]
	I0910 19:56:49.645910    8968 pod_ready.go:98] node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:56:49.645910    8968 pod_ready.go:82] duration metric: took 411.8629ms for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:49.645910    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:56:49.646446    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:49.829425    8968 request.go:632] Waited for 182.6554ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:56:49.829921    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:56:49.830132    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:49.830132    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:49.830223    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:49.833102    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:49.833102    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:49.833102    8968 round_trippers.go:580]     Audit-Id: f90a6a81-f7ba-4bae-95fa-61a9e378eed4
	I0910 19:56:49.833102    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:49.833504    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:49.833504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:49.833572    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:49.833572    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:49.833919    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qqrrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fc7fdda-d5e4-4c72-96c1-2348eb72b491","resourceVersion":"580","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0910 19:56:50.033123    8968 request.go:632] Waited for 199.0453ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:56:50.033123    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:56:50.033369    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.033369    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.033369    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.038790    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:56:50.039102    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.039138    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.039138    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.039138    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.039138    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:50.039138    8968 round_trippers.go:580]     Audit-Id: f2939d62-aab8-4279-8dc7-4f7fa6f5986e
	I0910 19:56:50.039138    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.039331    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"1311","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3819 chars]
	I0910 19:56:50.039521    8968 pod_ready.go:93] pod "kube-proxy-qqrrg" in "kube-system" namespace has status "Ready":"True"
	I0910 19:56:50.039521    8968 pod_ready.go:82] duration metric: took 393.0491ms for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:50.039521    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:50.237594    8968 request.go:632] Waited for 197.8807ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:56:50.237877    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:56:50.237990    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.238051    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.238051    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.240645    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:50.241083    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.241083    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.241083    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.241083    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.241083    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.241083    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:50.241083    8968 round_trippers.go:580]     Audit-Id: 0eaa324e-f4cc-4973-84b9-4f56008eb347
	I0910 19:56:50.241580    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"1662","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0910 19:56:50.441118    8968 request.go:632] Waited for 198.7028ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:50.441118    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:50.441118    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.441571    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.441571    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.444669    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:50.445101    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.445101    8968 round_trippers.go:580]     Audit-Id: 36628f11-df31-4ec0-a36b-20c8fc25aa32
	I0910 19:56:50.445238    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.445238    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.445238    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.445238    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.445238    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:50.445486    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:50.446015    8968 pod_ready.go:98] node "multinode-629100" hosting pod "kube-proxy-wqf2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:50.446237    8968 pod_ready.go:82] duration metric: took 406.6894ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:50.446237    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "kube-proxy-wqf2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:50.446237    8968 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:56:50.644397    8968 request.go:632] Waited for 197.9545ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:56:50.644705    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:56:50.644705    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.644705    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.644705    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.648594    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:50.648594    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.648594    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.648594    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.648594    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:50 GMT
	I0910 19:56:50.648594    8968 round_trippers.go:580]     Audit-Id: d75c8e27-365c-4745-accc-36cdaff81962
	I0910 19:56:50.648594    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.648594    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.649010    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"1651","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0910 19:56:50.831389    8968 request.go:632] Waited for 181.395ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:50.831671    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:50.831671    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:50.831671    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:50.831734    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:50.834450    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:50.835140    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:50.835140    8968 round_trippers.go:580]     Audit-Id: a9120c8b-57d6-4483-9cad-bc9469c62666
	I0910 19:56:50.835140    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:50.835140    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:50.835140    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:50.835140    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:50.835140    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:51 GMT
	I0910 19:56:50.835332    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:50.835403    8968 pod_ready.go:98] node "multinode-629100" hosting pod "kube-scheduler-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:50.835403    8968 pod_ready.go:82] duration metric: took 389.1401ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	E0910 19:56:50.835403    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100" hosting pod "kube-scheduler-multinode-629100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100" has status "Ready":"False"
	I0910 19:56:50.835403    8968 pod_ready.go:39] duration metric: took 1.6646493s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:56:50.835403    8968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:56:50.852477    8968 command_runner.go:130] > -16
	I0910 19:56:50.852626    8968 ops.go:34] apiserver oom_adj: -16
	I0910 19:56:50.852626    8968 kubeadm.go:597] duration metric: took 11.5389757s to restartPrimaryControlPlane
	I0910 19:56:50.852626    8968 kubeadm.go:394] duration metric: took 11.5997803s to StartCluster
	I0910 19:56:50.852626    8968 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:50.852626    8968 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:56:50.854513    8968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:56:50.856195    8968 start.go:235] Will wait 6m0s for node &{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 19:56:50.856195    8968 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:56:50.856907    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:56:50.862520    8968 out.go:177] * Verifying Kubernetes components...
	I0910 19:56:50.869517    8968 out.go:177] * Enabled addons: 
	I0910 19:56:50.874529    8968 addons.go:510] duration metric: took 18.332ms for enable addons: enabled=[]
	I0910 19:56:50.880774    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:56:51.108147    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:56:51.130588    8968 node_ready.go:35] waiting up to 6m0s for node "multinode-629100" to be "Ready" ...
	I0910 19:56:51.130758    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:51.130758    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:51.130758    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:51.130758    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:51.137143    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:51.137143    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:51.137143    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:51.137143    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:51.137143    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:51 GMT
	I0910 19:56:51.137143    8968 round_trippers.go:580]     Audit-Id: 5ba524b0-31fe-472b-a3ef-ac53951fa9f0
	I0910 19:56:51.137143    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:51.137143    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:51.137791    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:51.631851    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:51.631940    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:51.631940    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:51.631940    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:51.638814    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:51.638814    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:51.638814    8968 round_trippers.go:580]     Audit-Id: 0a9f5919-461f-4f17-857e-bb49bd84b23f
	I0910 19:56:51.638814    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:51.638814    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:51.638814    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:51.638814    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:51.638814    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:51 GMT
	I0910 19:56:51.638814    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:52.133153    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:52.133153    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:52.133153    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:52.133153    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:52.137001    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:52.137060    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:52.137060    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:52 GMT
	I0910 19:56:52.137060    8968 round_trippers.go:580]     Audit-Id: b4823266-9636-4448-b6da-23dc33e10017
	I0910 19:56:52.137060    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:52.137060    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:52.137121    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:52.137121    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:52.137177    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:52.631980    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:52.632043    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:52.632101    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:52.632101    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:52.638857    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:52.638857    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:52.638857    8968 round_trippers.go:580]     Audit-Id: 37b1bdda-30aa-4b7d-bf4c-4c03a0cab9b5
	I0910 19:56:52.638857    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:52.638857    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:52.638857    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:52.638857    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:52.638857    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:52 GMT
	I0910 19:56:52.638857    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:53.134125    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:53.134188    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:53.134188    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:53.134188    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:53.137674    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:56:53.137743    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:53.137743    8968 round_trippers.go:580]     Audit-Id: b09ba073-af41-4339-84d4-adc646a8a339
	I0910 19:56:53.137743    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:53.137743    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:53.137743    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:53.137743    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:53.137743    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:53 GMT
	I0910 19:56:53.138203    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:53.138874    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:56:53.641174    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:53.641174    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:53.641174    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:53.641174    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:53.647292    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:53.647292    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:53.647292    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:53.647292    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:53.647292    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:53.647292    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:53.647292    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:53 GMT
	I0910 19:56:53.647292    8968 round_trippers.go:580]     Audit-Id: 1b4a2e1f-d521-4bd9-a523-27ee209ada10
	I0910 19:56:53.647858    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:54.139744    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:54.139744    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:54.139744    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:54.139744    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:54.143882    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:56:54.143969    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:54.144029    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:54.144029    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:54.144029    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:54.144029    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:54.144083    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:54 GMT
	I0910 19:56:54.144083    8968 round_trippers.go:580]     Audit-Id: 379a870c-4731-488d-80ea-d974746bc082
	I0910 19:56:54.144462    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:54.638543    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:54.638543    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:54.638543    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:54.638543    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:54.642468    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:54.642468    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:54.642468    8968 round_trippers.go:580]     Audit-Id: 66a50367-084b-4ed6-8acb-b459d3ace642
	I0910 19:56:54.642468    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:54.642468    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:54.642468    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:54.642468    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:54.642468    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:54 GMT
	I0910 19:56:54.642468    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:55.141354    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:55.141443    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:55.141443    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:55.141535    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:55.144594    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:55.144676    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:55.144676    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:55.144676    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:55 GMT
	I0910 19:56:55.144739    8968 round_trippers.go:580]     Audit-Id: 9df6f54f-d6b0-48bc-9edf-0006fea95abc
	I0910 19:56:55.144739    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:55.144739    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:55.144739    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:55.145047    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:55.145625    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:56:55.637989    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:55.638050    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:55.638050    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:55.638050    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:55.643593    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:56:55.643593    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:55.643593    8968 round_trippers.go:580]     Audit-Id: eab64fe0-d630-451a-bf35-042dfc92114f
	I0910 19:56:55.643593    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:55.643593    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:55.643593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:55.643593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:55.643593    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:55 GMT
	I0910 19:56:55.644283    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:56.140105    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:56.140175    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:56.140175    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:56.140175    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:56.147767    8968 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:56:56.147836    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:56.147836    8968 round_trippers.go:580]     Audit-Id: b95ae3ab-2671-46e3-9bdb-e3e412cf819d
	I0910 19:56:56.147836    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:56.147836    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:56.147836    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:56.147836    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:56.147836    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:56 GMT
	I0910 19:56:56.147836    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:56.635950    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:56.636022    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:56.636022    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:56.636022    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:56.639115    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:56.639115    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:56.639115    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:56.639115    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:56.639115    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:56.639115    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:56 GMT
	I0910 19:56:56.639115    8968 round_trippers.go:580]     Audit-Id: a3b0ba40-1882-4975-a54b-d8e5dd574b26
	I0910 19:56:56.639115    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:56.639509    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:57.136806    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:57.136873    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:57.136873    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:57.136873    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:57.143717    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:57.143717    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:57.143717    8968 round_trippers.go:580]     Audit-Id: 5ec287db-d568-41e8-857c-673761bafc15
	I0910 19:56:57.143717    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:57.143717    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:57.143717    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:57.143717    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:57.143717    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:57 GMT
	I0910 19:56:57.144539    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:57.633592    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:57.633913    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:57.633913    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:57.633913    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:57.640009    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:56:57.640009    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:57.640009    8968 round_trippers.go:580]     Audit-Id: 4ec62175-9d1c-4b54-af84-280804b1cf3a
	I0910 19:56:57.640009    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:57.640009    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:57.640009    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:57.640009    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:57.640009    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:57 GMT
	I0910 19:56:57.640009    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:57.641359    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:56:58.135035    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:58.135107    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:58.135107    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:58.135107    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:58.139609    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:56:58.139676    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:58.139740    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:58.139740    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:58.139740    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:58 GMT
	I0910 19:56:58.139740    8968 round_trippers.go:580]     Audit-Id: 503ec55e-b23f-48e3-ad14-157b56843cf9
	I0910 19:56:58.139740    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:58.139740    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:58.140208    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:58.633187    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:58.633276    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:58.633352    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:58.633352    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:58.636949    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:58.637085    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:58.637085    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:58.637085    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:58.637191    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:58 GMT
	I0910 19:56:58.637191    8968 round_trippers.go:580]     Audit-Id: 7d791441-817d-4ce0-be03-32b2ccefe611
	I0910 19:56:58.637191    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:58.637191    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:58.637443    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:59.134544    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:59.134620    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:59.134692    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:59.134692    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:59.138350    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:56:59.138425    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:59.138425    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:59.138425    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:59 GMT
	I0910 19:56:59.138425    8968 round_trippers.go:580]     Audit-Id: 1dc257f7-72c5-47b9-b154-63792293e7a5
	I0910 19:56:59.138493    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:59.138493    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:59.138493    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:59.139096    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:59.633419    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:56:59.633519    8968 round_trippers.go:469] Request Headers:
	I0910 19:56:59.633607    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:56:59.633607    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:56:59.641920    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:56:59.641920    8968 round_trippers.go:577] Response Headers:
	I0910 19:56:59.641920    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:56:59.641920    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:56:59 GMT
	I0910 19:56:59.641920    8968 round_trippers.go:580]     Audit-Id: f59364d6-90db-4736-bd70-08c28e04e7c3
	I0910 19:56:59.641920    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:56:59.641920    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:56:59.641920    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:56:59.641920    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:56:59.643175    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:57:00.135059    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:00.135059    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:00.135059    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:00.135059    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:00.140067    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:00.140067    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:00.140067    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:00.140067    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:00.140067    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:00 GMT
	I0910 19:57:00.140067    8968 round_trippers.go:580]     Audit-Id: 9c6a2367-c266-476a-9773-2ac19254fe4a
	I0910 19:57:00.141072    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:00.141072    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:00.141072    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:00.634563    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:00.634563    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:00.634563    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:00.634563    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:00.637175    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:00.637175    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:00.637175    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:00.637175    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:00.637175    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:00.637175    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:00.638195    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:00 GMT
	I0910 19:57:00.638195    8968 round_trippers.go:580]     Audit-Id: 847abe16-33be-4e40-be91-2962afa4411d
	I0910 19:57:00.638288    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:01.133641    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:01.133712    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:01.133784    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:01.133784    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:01.137461    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:01.137461    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:01.137461    8968 round_trippers.go:580]     Audit-Id: f1f0bf41-87a3-4706-b193-b4b89590a6e0
	I0910 19:57:01.137461    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:01.138001    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:01.138001    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:01.138001    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:01.138001    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:01 GMT
	I0910 19:57:01.138450    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:01.634031    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:01.634031    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:01.634031    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:01.634031    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:01.638655    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:01.638655    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:01.638655    8968 round_trippers.go:580]     Audit-Id: 0d6bc5ef-8af3-4790-8c90-b7e06dc933d0
	I0910 19:57:01.638655    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:01.638655    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:01.638655    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:01.638655    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:01.638655    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:01 GMT
	I0910 19:57:01.638981    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:02.132646    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:02.133079    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.133079    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.133202    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.136368    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:02.136368    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.136368    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.136982    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.136982    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.136982    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.136982    8968 round_trippers.go:580]     Audit-Id: 89dfd954-7388-4011-940e-7f0cf59cc252
	I0910 19:57:02.136982    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.137291    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1675","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5574 chars]
	I0910 19:57:02.137770    8968 node_ready.go:53] node "multinode-629100" has status "Ready":"False"
	I0910 19:57:02.635339    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:02.635339    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.635339    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.635339    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.637923    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:02.637923    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.637923    8968 round_trippers.go:580]     Audit-Id: 5c856fa1-2dc2-4213-bdd2-5bf738880af1
	I0910 19:57:02.637923    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.637923    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.637923    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.638334    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.638334    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.638451    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:02.639518    8968 node_ready.go:49] node "multinode-629100" has status "Ready":"True"
	I0910 19:57:02.639572    8968 node_ready.go:38] duration metric: took 11.5081312s for node "multinode-629100" to be "Ready" ...
	I0910 19:57:02.639572    8968 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
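The `node_ready.go` lines above come from repeatedly fetching the Node object and inspecting its `Ready` condition. The full status sections are truncated in this log, but the shape of that check can be sketched with the standard library alone. This is a simplified stand-in, not minikube's actual code: the struct subset and the `isNodeReady` helper are illustrative names, covering only the fields needed to evaluate the condition.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal subset of the v1.Node schema needed to evaluate readiness;
// field names mirror the API objects shown (truncated) in the log.
type nodeCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

type nodeStatus struct {
	Conditions []nodeCondition `json:"conditions"`
}

type node struct {
	Status nodeStatus `json:"status"`
}

// isNodeReady reports whether the node's "Ready" condition is "True",
// the same question the node_ready.go log lines are answering.
// (Hypothetical helper; minikube's real implementation differs.)
func isNodeReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition reported yet: treat as not ready.
	return false, nil
}

func main() {
	// Trimmed stand-in for the response bodies truncated in the log.
	body := []byte(`{"kind":"Node","status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ready, err := isNodeReady(body)
	fmt.Println(ready, err)
}
```

The same condition-scan applies to the Pod readiness checks that follow: a Pod is "Ready" when its `status.conditions` entry of type `Ready` has status `True`.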
	I0910 19:57:02.639795    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:02.639861    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.639923    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.639923    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.644089    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:02.644648    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.644648    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.644717    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.644717    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.644787    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.644787    8968 round_trippers.go:580]     Audit-Id: 595c607e-99a5-43d1-8b95-fec833029bd3
	I0910 19:57:02.644787    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.646590    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1776"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89610 chars]
	I0910 19:57:02.650933    8968 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:02.650933    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:02.650933    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.650933    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.650933    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.657435    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:02.657435    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.657435    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.657435    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.657435    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.657435    8968 round_trippers.go:580]     Audit-Id: 36ce0d26-1a3e-4d92-805b-902e35b3b2ba
	I0910 19:57:02.657435    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.657435    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.657774    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:02.658296    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:02.658366    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:02.658366    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:02.658366    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:02.659933    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:57:02.659933    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:02.659933    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:02 GMT
	I0910 19:57:02.659933    8968 round_trippers.go:580]     Audit-Id: 5f259b94-e7b3-44a4-a9e8-aedef89105c0
	I0910 19:57:02.659933    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:02.659933    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:02.659933    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:02.659933    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:02.660917    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:03.152919    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:03.152990    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:03.152990    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:03.152990    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:03.159846    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:03.159846    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:03.159846    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:03.159846    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:03.159846    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:03.159846    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:03 GMT
	I0910 19:57:03.159846    8968 round_trippers.go:580]     Audit-Id: 0f7b3891-6c5c-4eeb-90cb-1182fa12c93b
	I0910 19:57:03.159846    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:03.159846    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:03.161990    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:03.162095    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:03.162145    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:03.162145    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:03.165259    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:03.165259    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:03.165259    8968 round_trippers.go:580]     Audit-Id: 40049601-b492-442c-84e3-5bff85588b73
	I0910 19:57:03.165259    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:03.165259    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:03.165259    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:03.165259    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:03.165259    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:03 GMT
	I0910 19:57:03.165259    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:03.653111    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:03.653175    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:03.653239    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:03.653239    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:03.657009    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:03.657009    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:03.657009    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:03.657009    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:03 GMT
	I0910 19:57:03.657009    8968 round_trippers.go:580]     Audit-Id: e56df73a-0297-439c-9d28-3145bb5b3e23
	I0910 19:57:03.657009    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:03.657382    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:03.657382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:03.657612    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:03.658496    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:03.658496    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:03.658496    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:03.658496    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:03.665630    8968 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:57:03.666162    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:03.666162    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:03.666162    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:03.666162    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:03 GMT
	I0910 19:57:03.666202    8968 round_trippers.go:580]     Audit-Id: 94443d94-b56e-46c7-a5f2-670871731336
	I0910 19:57:03.666202    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:03.666202    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:03.667688    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:04.166185    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:04.166185    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:04.166185    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:04.166295    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:04.170560    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:04.170560    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:04.170676    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:04.170676    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:04.170676    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:04.170676    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:04 GMT
	I0910 19:57:04.170676    8968 round_trippers.go:580]     Audit-Id: b0199659-fd08-4dd9-9a7b-be9e0dc2a8bd
	I0910 19:57:04.170676    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:04.170962    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:04.171926    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:04.171926    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:04.171926    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:04.172047    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:04.174348    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:04.174348    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:04.175041    8968 round_trippers.go:580]     Audit-Id: 37a63722-2733-43a2-ab61-3cc692c0f2dd
	I0910 19:57:04.175041    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:04.175041    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:04.175041    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:04.175041    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:04.175041    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:04 GMT
	I0910 19:57:04.175253    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:04.665724    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:04.665724    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:04.665724    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:04.665724    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:04.669283    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:04.670301    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:04.670367    8968 round_trippers.go:580]     Audit-Id: c4f8a826-aa15-4c44-a37e-bea5d938009b
	I0910 19:57:04.670367    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:04.670367    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:04.670367    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:04.670428    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:04.670428    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:04 GMT
	I0910 19:57:04.670990    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:04.671976    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:04.672075    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:04.672075    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:04.672075    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:04.676127    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:04.676127    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:04.676127    8968 round_trippers.go:580]     Audit-Id: de74ce9d-6a94-4f49-a844-4308649e54b5
	I0910 19:57:04.676127    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:04.676127    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:04.676127    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:04.676127    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:04.676127    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:04 GMT
	I0910 19:57:04.677076    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1776","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5351 chars]
	I0910 19:57:04.677076    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:05.154213    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:05.154213    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:05.154213    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:05.154213    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:05.158116    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:05.158116    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:05.158116    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:05.158116    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:05.158116    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:05.158116    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:05 GMT
	I0910 19:57:05.158116    8968 round_trippers.go:580]     Audit-Id: f72016e0-7ad5-4d4e-9980-01c9b3ee9b13
	I0910 19:57:05.158116    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:05.158116    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:05.159582    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:05.159582    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:05.159704    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:05.159704    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:05.162059    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:05.162059    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:05.162059    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:05.162059    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:05.162059    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:05.162059    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:05.162059    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:05 GMT
	I0910 19:57:05.162059    8968 round_trippers.go:580]     Audit-Id: 34c5d5df-9594-4789-8634-d5e961ac34d2
	I0910 19:57:05.163330    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:05.655895    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:05.655895    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:05.655895    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:05.655895    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:05.659453    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:05.659708    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:05.659708    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:05 GMT
	I0910 19:57:05.659708    8968 round_trippers.go:580]     Audit-Id: d879c180-0592-49b7-b07e-4ad2562050d2
	I0910 19:57:05.659708    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:05.659708    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:05.659708    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:05.659708    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:05.659870    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:05.661144    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:05.661219    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:05.661219    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:05.661219    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:05.663516    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:05.663516    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:05.663516    8968 round_trippers.go:580]     Audit-Id: e7ab7840-5f70-4f14-9630-6a052ba7cfd0
	I0910 19:57:05.664175    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:05.664175    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:05.664175    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:05.664175    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:05.664175    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:05 GMT
	I0910 19:57:05.664471    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:06.162728    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:06.162795    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:06.162795    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:06.162849    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:06.166960    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:06.167022    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:06.167022    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:06.167022    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:06.167101    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:06.167101    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:06.167101    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:06 GMT
	I0910 19:57:06.167101    8968 round_trippers.go:580]     Audit-Id: 074c168c-e064-4a57-a1ba-99babc6bd23d
	I0910 19:57:06.167517    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:06.168629    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:06.168629    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:06.168714    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:06.168714    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:06.172868    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:06.172868    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:06.172868    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:06.172868    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:06.172868    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:06.172868    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:06 GMT
	I0910 19:57:06.172868    8968 round_trippers.go:580]     Audit-Id: a544011e-f90a-4764-8591-097da25fc39a
	I0910 19:57:06.172868    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:06.172868    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:06.660918    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:06.660918    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:06.660918    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:06.660918    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:06.665601    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:06.665601    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:06.665601    8968 round_trippers.go:580]     Audit-Id: 87a03276-235e-4123-8929-7f3dc8cc5edf
	I0910 19:57:06.665601    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:06.665601    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:06.665601    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:06.665601    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:06.665601    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:06 GMT
	I0910 19:57:06.665601    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:06.667104    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:06.667104    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:06.667162    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:06.667162    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:06.669723    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:06.670634    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:06.670634    8968 round_trippers.go:580]     Audit-Id: e8ddae4c-0866-4407-a182-835791a5a2b8
	I0910 19:57:06.670634    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:06.670663    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:06.670663    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:06.670663    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:06.670663    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:06 GMT
	I0910 19:57:06.670663    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:07.161575    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:07.161671    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:07.161671    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:07.161747    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:07.167076    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:07.167076    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:07.167076    8968 round_trippers.go:580]     Audit-Id: c4b102f2-53c0-43c1-b4bb-baa59cb464da
	I0910 19:57:07.167076    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:07.167076    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:07.167076    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:07.167076    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:07.167076    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:07 GMT
	I0910 19:57:07.167325    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:07.167965    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:07.167965    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:07.167965    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:07.167965    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:07.170194    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:07.170435    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:07.170435    8968 round_trippers.go:580]     Audit-Id: 45c9cda7-5749-460c-aabf-d8a40002e471
	I0910 19:57:07.170435    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:07.170435    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:07.170435    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:07.170435    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:07.170435    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:07 GMT
	I0910 19:57:07.171081    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:07.171788    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:07.663731    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:07.663795    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:07.663943    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:07.663943    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:07.667382    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:07.667382    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:07.667382    8968 round_trippers.go:580]     Audit-Id: 6fc423df-0aa7-4b9c-bf28-4634cc56a4d3
	I0910 19:57:07.667382    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:07.667382    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:07.667382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:07.667382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:07.667382    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:07 GMT
	I0910 19:57:07.668421    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:07.669076    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:07.669165    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:07.669165    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:07.669165    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:07.672311    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:07.672311    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:07.672311    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:07 GMT
	I0910 19:57:07.672311    8968 round_trippers.go:580]     Audit-Id: 56018a0f-7b0c-4fa9-a0a2-4d1ac2e7798b
	I0910 19:57:07.672311    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:07.672311    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:07.672311    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:07.672311    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:07.673391    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:08.162015    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:08.162015    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:08.162015    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:08.162134    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:08.167851    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:08.167851    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:08.167851    8968 round_trippers.go:580]     Audit-Id: df7f25ab-7dc2-4e72-bde5-691f76002fdc
	I0910 19:57:08.167851    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:08.167851    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:08.167851    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:08.167851    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:08.167851    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:08 GMT
	I0910 19:57:08.168481    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:08.169338    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:08.169338    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:08.169338    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:08.169338    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:08.172493    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:08.172493    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:08.172493    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:08 GMT
	I0910 19:57:08.172493    8968 round_trippers.go:580]     Audit-Id: 1ccd31f0-9804-495a-97db-c3f27e17340e
	I0910 19:57:08.172493    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:08.172493    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:08.172493    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:08.172493    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:08.173115    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:08.665445    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:08.665445    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:08.665445    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:08.665555    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:08.674321    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 19:57:08.674321    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:08.674321    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:08.674321    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:08.674321    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:08.674321    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:08 GMT
	I0910 19:57:08.674321    8968 round_trippers.go:580]     Audit-Id: e5c3d861-5957-413b-b65e-e7692573c8f4
	I0910 19:57:08.674321    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:08.674321    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:08.675624    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:08.675715    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:08.675715    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:08.675715    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:08.678633    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:08.678696    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:08.678696    8968 round_trippers.go:580]     Audit-Id: 92cc3c85-c723-42a2-b974-b3bdde4ef8f6
	I0910 19:57:08.678696    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:08.678696    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:08.678696    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:08.678696    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:08.678696    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:08 GMT
	I0910 19:57:08.678696    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:09.166401    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:09.166470    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:09.166470    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:09.166470    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:09.173896    8968 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:57:09.173896    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:09.173896    8968 round_trippers.go:580]     Audit-Id: 33c30ae4-c3e7-4e32-abff-338ce8d4e9fe
	I0910 19:57:09.173896    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:09.173896    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:09.173896    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:09.173896    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:09.173896    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:09 GMT
	I0910 19:57:09.173896    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:09.174911    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:09.174911    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:09.174911    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:09.174911    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:09.178137    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:09.178137    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:09.178137    8968 round_trippers.go:580]     Audit-Id: 4b47d08b-9048-4e2a-a132-39af04121df0
	I0910 19:57:09.178137    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:09.178137    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:09.179064    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:09.179064    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:09.179064    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:09 GMT
	I0910 19:57:09.179256    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:09.179628    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:09.663018    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:09.663139    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:09.663139    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:09.663139    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:09.667682    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:09.667682    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:09.667682    8968 round_trippers.go:580]     Audit-Id: 538ed9ae-ca46-4e9f-8fb5-6360f752f073
	I0910 19:57:09.667796    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:09.667796    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:09.667796    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:09.667796    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:09.667796    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:09 GMT
	I0910 19:57:09.667872    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:09.669110    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:09.669110    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:09.669110    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:09.669208    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:09.671579    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:09.671579    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:09.671579    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:09.671579    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:09.671579    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:09.671579    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:09.671579    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:09 GMT
	I0910 19:57:09.672291    8968 round_trippers.go:580]     Audit-Id: 5f435f97-a27c-450c-97ef-a1fc16d2eba2
	I0910 19:57:09.672771    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:10.162577    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:10.162577    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:10.162679    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:10.162679    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:10.168925    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:10.168925    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:10.168925    8968 round_trippers.go:580]     Audit-Id: 908d4d31-5558-463b-aa8e-ad1240ae1d18
	I0910 19:57:10.168925    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:10.168925    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:10.168925    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:10.168925    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:10.168925    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:10 GMT
	I0910 19:57:10.169552    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:10.170274    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:10.170274    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:10.170274    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:10.170274    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:10.173478    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:10.173478    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:10.173478    8968 round_trippers.go:580]     Audit-Id: 8bb03296-54bc-43dc-92e0-8686cde45563
	I0910 19:57:10.173478    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:10.173478    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:10.173478    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:10.173478    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:10.173478    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:10 GMT
	I0910 19:57:10.174021    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:10.664901    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:10.664963    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:10.664963    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:10.664963    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:10.668405    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:10.668405    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:10.668405    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:10.668405    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:10.668405    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:10.668405    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:10 GMT
	I0910 19:57:10.668405    8968 round_trippers.go:580]     Audit-Id: 4739744b-3f03-4c6f-b194-1dbd22c70c0b
	I0910 19:57:10.668405    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:10.668405    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:10.669414    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:10.669414    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:10.669414    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:10.669414    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:10.671996    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:10.671996    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:10.671996    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:10.671996    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:10.672289    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:10.672289    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:10.672289    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:10 GMT
	I0910 19:57:10.672289    8968 round_trippers.go:580]     Audit-Id: ca76f3d8-e739-4b3e-bbfa-ec1bdfedaab3
	I0910 19:57:10.672702    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:11.164961    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:11.164961    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:11.164961    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:11.165331    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:11.170791    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:11.170791    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:11.170791    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:11.170791    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:11.170791    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:11.170791    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:11 GMT
	I0910 19:57:11.170791    8968 round_trippers.go:580]     Audit-Id: 2651ad92-522f-49f1-a35f-3bc1a182c90f
	I0910 19:57:11.170791    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:11.171324    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:11.171456    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:11.171456    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:11.171456    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:11.171456    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:11.174509    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:11.174509    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:11.174509    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:11.174509    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:11.174509    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:11 GMT
	I0910 19:57:11.174509    8968 round_trippers.go:580]     Audit-Id: fc5924de-a40c-4c49-96d1-ea521c321d69
	I0910 19:57:11.174509    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:11.174509    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:11.174509    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:11.664773    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:11.664868    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:11.664868    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:11.664868    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:11.668928    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:11.668928    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:11.668928    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:11.668928    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:11.668928    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:11 GMT
	I0910 19:57:11.668928    8968 round_trippers.go:580]     Audit-Id: fac53fba-93b1-498c-b103-553508a2bf5a
	I0910 19:57:11.668928    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:11.668928    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:11.669632    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:11.670430    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:11.670430    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:11.670541    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:11.670541    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:11.676792    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:11.677744    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:11.677744    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:11 GMT
	I0910 19:57:11.677744    8968 round_trippers.go:580]     Audit-Id: d5d3896d-9558-48bc-b53a-90de8aa7d16e
	I0910 19:57:11.677744    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:11.677744    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:11.677744    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:11.677744    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:11.677744    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:11.678320    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:12.164258    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:12.164329    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:12.164399    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:12.164399    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:12.170506    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:12.170506    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:12.170506    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:12 GMT
	I0910 19:57:12.170506    8968 round_trippers.go:580]     Audit-Id: 9fc5a8ed-a075-45b7-af0a-48458ba11339
	I0910 19:57:12.170506    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:12.170506    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:12.170506    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:12.170506    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:12.170742    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:12.171466    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:12.171466    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:12.171466    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:12.171466    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:12.174026    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:12.174026    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:12.174927    8968 round_trippers.go:580]     Audit-Id: 6f46e104-723e-438a-9c6b-971abd82c1bd
	I0910 19:57:12.174927    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:12.174927    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:12.174927    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:12.174927    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:12.174927    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:12 GMT
	I0910 19:57:12.175389    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:12.664950    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:12.665007    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:12.665007    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:12.665007    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:12.671770    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:12.671770    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:12.672755    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:12.672755    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:12.672755    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:12.672755    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:12.672755    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:12 GMT
	I0910 19:57:12.672755    8968 round_trippers.go:580]     Audit-Id: b67cec37-a3f9-4b37-88d1-248a33b343f5
	I0910 19:57:12.672755    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:12.673515    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:12.673597    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:12.673597    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:12.673597    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:12.676674    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:12.676771    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:12.676831    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:12.676831    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:12 GMT
	I0910 19:57:12.676831    8968 round_trippers.go:580]     Audit-Id: ed6690ce-4983-4b73-bb3f-493a788d08a1
	I0910 19:57:12.676900    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:12.676900    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:12.676900    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:12.677241    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:13.167259    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:13.167259    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:13.167259    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:13.167259    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:13.171440    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:13.171440    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:13.171440    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:13 GMT
	I0910 19:57:13.171440    8968 round_trippers.go:580]     Audit-Id: 8ddc3fd2-6b33-4525-8f16-77a0ca6a7c56
	I0910 19:57:13.171440    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:13.171440    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:13.171440    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:13.171440    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:13.171719    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:13.172785    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:13.172785    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:13.172785    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:13.172785    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:13.175369    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:13.175369    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:13.175369    8968 round_trippers.go:580]     Audit-Id: f47b60f6-4dcc-47a1-8677-0f5bbbd24c7e
	I0910 19:57:13.175369    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:13.175369    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:13.176293    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:13.176293    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:13.176293    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:13 GMT
	I0910 19:57:13.176341    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:13.665292    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:13.665292    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:13.665292    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:13.665292    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:13.668490    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:13.669523    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:13.669571    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:13 GMT
	I0910 19:57:13.669571    8968 round_trippers.go:580]     Audit-Id: 201f37da-e4a6-4ef2-a216-6c88d4c9b391
	I0910 19:57:13.669571    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:13.669571    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:13.669571    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:13.669571    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:13.669571    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:13.670209    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:13.670209    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:13.670338    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:13.670338    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:13.672693    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:13.673076    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:13.673141    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:13.673141    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:13.673141    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:13.673141    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:13.673141    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:13 GMT
	I0910 19:57:13.673141    8968 round_trippers.go:580]     Audit-Id: 2a1a7c49-1e0e-4df3-bb12-8bdfa10139cb
	I0910 19:57:13.673141    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:14.162355    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:14.162418    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:14.162418    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:14.162481    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:14.168551    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:14.168551    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:14.168551    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:14 GMT
	I0910 19:57:14.168551    8968 round_trippers.go:580]     Audit-Id: 87a5c399-d481-421b-a451-f9393e4281d9
	I0910 19:57:14.168551    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:14.168551    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:14.168551    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:14.168551    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:14.169224    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:14.169224    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:14.169224    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:14.169224    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:14.169224    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:14.172551    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:14.173230    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:14.173230    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:14 GMT
	I0910 19:57:14.173230    8968 round_trippers.go:580]     Audit-Id: 85cecbff-3a79-49a8-a15a-cfa61c56e91b
	I0910 19:57:14.173230    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:14.173230    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:14.173230    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:14.173230    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:14.173514    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:14.173920    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
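The `pod_ready.go:103` lines above record minikube's verdict after each poll: it fetches the Pod JSON (the `GET .../pods/coredns-6f6b679f8f-srtv8` round trips) and reports `"Ready":"False"` until the Pod's `Ready` condition flips. A minimal sketch of that check, assuming the standard Pod `status.conditions` shape shown in the response bodies (this is an illustrative helper, not minikube's actual source):

```python
# Hypothetical sketch: derive the Ready verdict that pod_ready.go logs
# from a Pod object like the JSON response bodies in this log.
def pod_is_ready(pod: dict) -> bool:
    """Return True only when the Pod carries a Ready condition with status 'True'."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # no Ready condition yet, e.g. Pod still pending

# The coredns Pod polled above never reaches Ready, so the loop keeps retrying:
sample = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}
print(pod_is_ready(sample))  # False
```

Each iteration in the log is one such check roughly every 500 ms, which is why the same `GET` pair (Pod, then Node) repeats until the wait times out.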
	I0910 19:57:14.661247    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:14.661247    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:14.661247    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:14.661247    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:14.665282    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:14.665282    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:14.665282    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:14 GMT
	I0910 19:57:14.665282    8968 round_trippers.go:580]     Audit-Id: 76664fc4-4d08-4238-b02a-890a384187fd
	I0910 19:57:14.665282    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:14.665282    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:14.665282    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:14.665282    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:14.665282    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:14.666689    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:14.666774    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:14.666774    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:14.666774    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:14.670081    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:14.670081    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:14.670081    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:14.670456    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:14.670456    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:14 GMT
	I0910 19:57:14.670456    8968 round_trippers.go:580]     Audit-Id: 0497702e-96e6-429e-84f0-4c2bd31b6fea
	I0910 19:57:14.670456    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:14.670456    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:14.670740    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:15.164938    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:15.165004    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:15.165004    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:15.165072    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:15.168474    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:15.168474    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:15.168474    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:15.168474    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:15.168474    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:15 GMT
	I0910 19:57:15.168474    8968 round_trippers.go:580]     Audit-Id: 72518ddd-7e61-4fee-acd2-2c0a18c5cdb0
	I0910 19:57:15.168474    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:15.168474    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:15.169129    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:15.169685    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:15.169685    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:15.169768    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:15.169768    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:15.172919    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:15.172919    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:15.172919    8968 round_trippers.go:580]     Audit-Id: 48411778-c64f-47d6-ae5a-3d02f348b8f9
	I0910 19:57:15.172919    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:15.172919    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:15.172919    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:15.172919    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:15.172919    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:15 GMT
	I0910 19:57:15.173502    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:15.662479    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:15.662479    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:15.662479    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:15.662479    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:15.666282    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:15.666282    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:15.666282    8968 round_trippers.go:580]     Audit-Id: 659ea1bb-c566-4acc-a194-d9437c3936c2
	I0910 19:57:15.666282    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:15.666282    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:15.666282    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:15.666282    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:15.666282    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:15 GMT
	I0910 19:57:15.666282    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:15.667164    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:15.667164    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:15.667164    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:15.667164    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:15.669945    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:57:15.669945    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:15.669945    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:15.669945    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:15 GMT
	I0910 19:57:15.669945    8968 round_trippers.go:580]     Audit-Id: 1f35f206-aea0-47c8-93ca-5a5d87ac5871
	I0910 19:57:15.669945    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:15.669945    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:15.669945    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:15.669945    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:16.160657    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:16.160720    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:16.160720    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:16.160720    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:16.164050    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:16.164050    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:16.164535    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:16.164535    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:16.164535    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:16.164535    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:16 GMT
	I0910 19:57:16.164535    8968 round_trippers.go:580]     Audit-Id: 586fad75-2a87-4a46-93d9-1ffee52298d8
	I0910 19:57:16.164535    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:16.164987    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:16.166010    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:16.166010    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:16.166010    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:16.166010    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:16.168580    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:16.168580    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:16.168580    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:16 GMT
	I0910 19:57:16.168580    8968 round_trippers.go:580]     Audit-Id: 6f13298f-932b-464f-b109-2d16630fae59
	I0910 19:57:16.168580    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:16.168580    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:16.168580    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:16.168580    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:16.169459    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:16.665678    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:16.665758    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:16.665758    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:16.665758    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:16.669530    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:16.669530    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:16.669530    8968 round_trippers.go:580]     Audit-Id: 72b94d77-2045-4bcb-b34c-909dca1615a1
	I0910 19:57:16.669530    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:16.669530    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:16.669530    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:16.669530    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:16.669750    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:16 GMT
	I0910 19:57:16.669972    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:16.671100    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:16.671156    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:16.671209    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:16.671209    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:16.673558    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:16.673558    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:16.673558    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:16.673558    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:16.674516    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:16.674516    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:16.674516    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:16 GMT
	I0910 19:57:16.674516    8968 round_trippers.go:580]     Audit-Id: 34c8bae0-36d4-49c0-8aa5-4888063bf09d
	I0910 19:57:16.674813    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:16.675047    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:17.164482    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:17.164552    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:17.164552    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:17.164602    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:17.168302    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:17.168358    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:17.168358    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:17.168358    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:17.168358    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:17.168358    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:17.168358    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:17 GMT
	I0910 19:57:17.168358    8968 round_trippers.go:580]     Audit-Id: 60b4c75b-c960-4c7f-aecf-8e44f5e67bef
	I0910 19:57:17.168358    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:17.169135    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:17.169135    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:17.169135    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:17.169135    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:17.171381    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:17.172289    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:17.172289    8968 round_trippers.go:580]     Audit-Id: 88fd094a-3431-43df-a677-132d108b26d3
	I0910 19:57:17.172289    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:17.172289    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:17.172289    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:17.172289    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:17.172289    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:17 GMT
	I0910 19:57:17.172559    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:17.661204    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:17.661328    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:17.661328    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:17.661328    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:17.664780    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:17.664780    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:17.664780    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:17.664780    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:17.664780    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:17 GMT
	I0910 19:57:17.665397    8968 round_trippers.go:580]     Audit-Id: a6b55a40-d22e-4f78-8e4f-6c672597c645
	I0910 19:57:17.665397    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:17.665452    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:17.665646    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:17.666722    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:17.666722    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:17.666805    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:17.666805    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:17.669747    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:17.669747    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:17.669854    8968 round_trippers.go:580]     Audit-Id: 1ad5007e-482d-4d17-b8de-55e8f0a58b90
	I0910 19:57:17.669854    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:17.669854    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:17.669854    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:17.669944    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:17.669944    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:17 GMT
	I0910 19:57:17.670372    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:18.160359    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:18.160431    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:18.160511    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:18.160511    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:18.165990    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:18.165990    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:18.165990    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:18.165990    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:18.165990    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:18 GMT
	I0910 19:57:18.165990    8968 round_trippers.go:580]     Audit-Id: ddbb3f3c-1c4f-47ca-840f-02e9aff59517
	I0910 19:57:18.165990    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:18.165990    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:18.166524    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:18.167502    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:18.167502    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:18.167502    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:18.167502    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:18.170421    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:18.170421    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:18.170421    8968 round_trippers.go:580]     Audit-Id: 0c6fc69a-90e3-45b4-8332-1685d4d1b3ca
	I0910 19:57:18.170421    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:18.170421    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:18.170421    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:18.170421    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:18.170421    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:18 GMT
	I0910 19:57:18.170421    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:18.665949    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:18.666011    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:18.666011    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:18.666075    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:18.669374    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:18.670040    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:18.670040    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:18.670040    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:18.670130    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:18 GMT
	I0910 19:57:18.670130    8968 round_trippers.go:580]     Audit-Id: ec22695c-1bf4-416e-a880-e3fa01eff647
	I0910 19:57:18.670130    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:18.670130    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:18.670397    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:18.671541    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:18.671541    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:18.671541    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:18.671541    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:18.676142    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:18.676142    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:18.676142    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:18.676142    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:18.676142    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:18.676142    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:18 GMT
	I0910 19:57:18.676142    8968 round_trippers.go:580]     Audit-Id: 718e92c7-6101-4a00-8756-b807dc5c5b92
	I0910 19:57:18.676142    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:18.676940    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:18.677362    8968 pod_ready.go:103] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"False"
	I0910 19:57:19.160212    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:19.160212    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:19.160212    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:19.160212    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:19.164603    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:19.164603    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:19.164603    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:19.164765    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:19 GMT
	I0910 19:57:19.164765    8968 round_trippers.go:580]     Audit-Id: d5e55e1c-ba66-40ec-a617-e99d391f6b00
	I0910 19:57:19.164765    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:19.164765    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:19.164830    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:19.165018    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:19.165613    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:19.165613    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:19.166133    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:19.166171    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:19.170431    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:19.170520    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:19.170520    8968 round_trippers.go:580]     Audit-Id: 2259e8fe-3ad6-46db-a019-471e10feb1cb
	I0910 19:57:19.170571    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:19.170571    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:19.170571    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:19.170571    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:19.170571    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:19 GMT
	I0910 19:57:19.171296    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:19.664292    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:19.664376    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:19.664376    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:19.664376    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:19.667717    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:19.668060    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:19.668060    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:19.668060    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:19 GMT
	I0910 19:57:19.668060    8968 round_trippers.go:580]     Audit-Id: 7807eb53-0060-4480-af0f-a4c6dd506464
	I0910 19:57:19.668164    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:19.668164    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:19.668164    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:19.668496    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1664","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0910 19:57:19.669601    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:19.669693    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:19.669693    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:19.669693    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:19.672355    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:19.672982    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:19.672982    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:19 GMT
	I0910 19:57:19.672982    8968 round_trippers.go:580]     Audit-Id: ffa823b5-d983-458d-8007-dd7f36c9a720
	I0910 19:57:19.672982    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:19.672982    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:19.672982    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:19.672982    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:19.673228    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.160267    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:57:20.160267    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.160639    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.160639    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.164336    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:20.164414    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.164414    8968 round_trippers.go:580]     Audit-Id: 01415253-c07e-4681-b411-7ce2bf200b70
	I0910 19:57:20.164414    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.164504    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.164504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.164504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.164504    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.165337    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0910 19:57:20.166491    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.166491    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.166491    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.166491    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.172394    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:20.172394    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.172394    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.172394    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.172394    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.172394    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.172394    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.172394    8968 round_trippers.go:580]     Audit-Id: fd3ff111-50f0-46a9-8c02-612911e99e43
	I0910 19:57:20.173340    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.173340    8968 pod_ready.go:93] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.173340    8968 pod_ready.go:82] duration metric: took 17.5212573s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.173340    8968 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.174024    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 19:57:20.174024    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.174024    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.174092    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.177005    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.177005    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.177005    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.177005    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.177005    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.177005    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.177005    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.177005    8968 round_trippers.go:580]     Audit-Id: d50cb178-a2bd-4292-b612-78040450e7b2
	I0910 19:57:20.177005    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1766","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I0910 19:57:20.178018    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.178747    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.178777    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.178891    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.181155    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.181155    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.181155    8968 round_trippers.go:580]     Audit-Id: 75b7c548-e5fc-4567-a30e-06fc802ce4f7
	I0910 19:57:20.181155    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.181155    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.181155    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.181155    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.181155    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.182015    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.182406    8968 pod_ready.go:93] pod "etcd-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.182470    8968 pod_ready.go:82] duration metric: took 9.0651ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.182470    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.182536    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 19:57:20.182594    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.182594    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.182594    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.184992    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.184992    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.184992    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.184992    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.184992    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.184992    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.184992    8968 round_trippers.go:580]     Audit-Id: ecaf97db-b10e-47bf-9a0c-3731632e7426
	I0910 19:57:20.184992    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.185824    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5","resourceVersion":"1763","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.215.172:8443","kubernetes.io/config.hash":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.mirror":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.seen":"2024-09-10T19:56:41.591483300Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I0910 19:57:20.185824    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.185824    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.185824    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.185824    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.189085    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:20.189085    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.189085    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.189085    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.189085    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.189085    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.189085    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.189085    8968 round_trippers.go:580]     Audit-Id: d8962aca-72f0-4df9-b96f-db48a0c013c4
	I0910 19:57:20.189641    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.189641    8968 pod_ready.go:93] pod "kube-apiserver-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.189641    8968 pod_ready.go:82] duration metric: took 7.1703ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.189641    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.189641    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 19:57:20.189641    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.189641    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.189641    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.192216    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.192216    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.192216    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.192216    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.192216    8968 round_trippers.go:580]     Audit-Id: d8b87831-f303-4edb-b67f-b325bdc8bc73
	I0910 19:57:20.192216    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.192216    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.192216    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.193205    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"1770","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0910 19:57:20.193205    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.193205    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.193205    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.193205    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.196188    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.196188    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.196188    8968 round_trippers.go:580]     Audit-Id: b8c6e9a1-5de8-43d4-aece-bfab104a8708
	I0910 19:57:20.196188    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.196188    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.196188    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.196188    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.196188    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.196396    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.196837    8968 pod_ready.go:93] pod "kube-controller-manager-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.196837    8968 pod_ready.go:82] duration metric: took 7.1955ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.196837    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.196974    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:57:20.196974    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.196974    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.196974    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.199389    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.199698    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.199724    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.199724    8968 round_trippers.go:580]     Audit-Id: 7b6102e3-a58f-4273-be13-bfe7f8612ec9
	I0910 19:57:20.199724    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.199724    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.199759    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.199759    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.199916    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4tzx6","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bb18c28-3ee9-4028-a61d-3d7f6ea31894","resourceVersion":"1613","creationTimestamp":"2024-09-10T19:42:55Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0910 19:57:20.200419    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:57:20.200419    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.200419    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.200419    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.202800    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.202849    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.202849    8968 round_trippers.go:580]     Audit-Id: 66d20dd7-4e80-4b48-9b6b-b7e41d8b1b3a
	I0910 19:57:20.202849    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.202849    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.202849    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.202849    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.202849    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.203082    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"39f4665c-e276-4967-a647-46b3a9a1c67a","resourceVersion":"1764","creationTimestamp":"2024-09-10T19:52:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_52_30_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4394 chars]
	I0910 19:57:20.203082    8968 pod_ready.go:98] node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:57:20.203082    8968 pod_ready.go:82] duration metric: took 6.245ms for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	E0910 19:57:20.203082    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:57:20.203082    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.362542    8968 request.go:632] Waited for 159.2048ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:57:20.362692    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:57:20.362692    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.362747    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.362774    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.366813    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:20.366813    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.366889    8968 round_trippers.go:580]     Audit-Id: cf198a53-b24f-4fc8-ac4b-f49346652a41
	I0910 19:57:20.366889    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.366889    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.366889    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.366889    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.366954    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.367975    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qqrrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fc7fdda-d5e4-4c72-96c1-2348eb72b491","resourceVersion":"580","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0910 19:57:20.567891    8968 request.go:632] Waited for 198.8872ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:57:20.567891    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:57:20.567891    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.567891    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.567891    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.570555    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.571539    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.571567    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.571567    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.571567    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:20 GMT
	I0910 19:57:20.571567    8968 round_trippers.go:580]     Audit-Id: 06f30ea1-31b7-4eaa-b53e-b6b962fe11bd
	I0910 19:57:20.571567    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.571567    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.571911    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"a82f3bc4-899c-406b-b321-16365e535c5d","resourceVersion":"1311","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_38_34_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3819 chars]
	I0910 19:57:20.572510    8968 pod_ready.go:93] pod "kube-proxy-qqrrg" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.572510    8968 pod_ready.go:82] duration metric: took 369.4036ms for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.572607    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.771909    8968 request.go:632] Waited for 199.2892ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:57:20.772211    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:57:20.772512    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.772512    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.772512    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.775824    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:20.776294    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.776379    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.776379    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.776379    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.776379    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:20.776379    8968 round_trippers.go:580]     Audit-Id: 15c46eb1-3e13-49e7-bc7c-cb609a3758cf
	I0910 19:57:20.776379    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.776680    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"1662","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0910 19:57:20.975272    8968 request.go:632] Waited for 197.5848ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.975467    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:20.975467    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:20.975467    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:20.975467    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:20.979079    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:20.979079    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:20.979154    8968 round_trippers.go:580]     Audit-Id: 011253b1-ac7c-40b3-9f25-ea7150df599c
	I0910 19:57:20.979154    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:20.979154    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:20.979154    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:20.979154    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:20.979212    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:20.979484    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:20.980118    8968 pod_ready.go:93] pod "kube-proxy-wqf2d" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:20.980118    8968 pod_ready.go:82] duration metric: took 407.4843ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:20.980118    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:21.162025    8968 request.go:632] Waited for 181.7709ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:57:21.162199    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:57:21.162199    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.162199    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.162199    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.167623    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:57:21.167623    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.167623    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.167623    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.167623    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:21.167623    8968 round_trippers.go:580]     Audit-Id: 01323a6e-ef70-4169-af83-e48291e93d51
	I0910 19:57:21.167623    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.167623    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.168738    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"1757","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0910 19:57:21.363555    8968 request.go:632] Waited for 194.041ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:21.363555    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:57:21.363555    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.363555    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.363555    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.367135    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:21.367135    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.367135    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.367135    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:21.367135    8968 round_trippers.go:580]     Audit-Id: 4b7f7949-a52a-45af-a74b-3983206971f2
	I0910 19:57:21.367978    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.368079    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.368079    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.368513    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:57:21.368653    8968 pod_ready.go:93] pod "kube-scheduler-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:57:21.368653    8968 pod_ready.go:82] duration metric: took 388.5092ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:57:21.368653    8968 pod_ready.go:39] duration metric: took 18.7277589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:57:21.369216    8968 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:57:21.380454    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:57:21.404406    8968 command_runner.go:130] > 1954
	I0910 19:57:21.404406    8968 api_server.go:72] duration metric: took 30.5462048s to wait for apiserver process to appear ...
	I0910 19:57:21.404539    8968 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:57:21.404539    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:57:21.413739    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 200:
	ok
	I0910 19:57:21.414104    8968 round_trippers.go:463] GET https://172.31.215.172:8443/version
	I0910 19:57:21.414174    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.414174    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.414174    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.418082    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:21.418129    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.418129    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.418129    8968 round_trippers.go:580]     Content-Length: 263
	I0910 19:57:21.418129    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:21.418291    8968 round_trippers.go:580]     Audit-Id: c1b3837d-54aa-45fc-8e3c-fba7b5325f45
	I0910 19:57:21.418291    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.418291    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.418328    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.418508    8968 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0910 19:57:21.418508    8968 api_server.go:141] control plane version: v1.31.0
	I0910 19:57:21.418508    8968 api_server.go:131] duration metric: took 13.9686ms to wait for apiserver health ...
	I0910 19:57:21.418508    8968 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:57:21.569797    8968 request.go:632] Waited for 151.2785ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:21.569957    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:21.569957    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.569957    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.569957    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.574584    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:57:21.575592    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.575592    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:21 GMT
	I0910 19:57:21.575592    8968 round_trippers.go:580]     Audit-Id: f839a8a2-8f8c-4876-bcf1-73dd47b37852
	I0910 19:57:21.575592    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.575592    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.575592    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.575592    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.577629    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1807"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89970 chars]
	I0910 19:57:21.581502    8968 system_pods.go:59] 12 kube-system pods found
	I0910 19:57:21.581502    8968 system_pods.go:61] "coredns-6f6b679f8f-srtv8" [76dd899a-75f4-497d-a6a9-6b263f3a379d] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "etcd-multinode-629100" [2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kindnet-5crht" [d569a3a6-5b06-4adf-9ac0-294274923906] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kindnet-6tdpv" [2c45f0f2-5d24-4ec2-8e6b-06923ea85e78] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kindnet-lj2v2" [6643175a-32c5-4461-a441-08b9ac9ba98a] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-apiserver-multinode-629100" [5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-controller-manager-multinode-629100" [60adcb0c-808a-477c-9432-83dc8f96d6c0] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-proxy-4tzx6" [9bb18c28-3ee9-4028-a61d-3d7f6ea31894] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-proxy-qqrrg" [1fc7fdda-d5e4-4c72-96c1-2348eb72b491] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-proxy-wqf2d" [27e7846e-e506-48e0-96b9-351b3ca91703] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "kube-scheduler-multinode-629100" [fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7] Running
	I0910 19:57:21.581502    8968 system_pods.go:61] "storage-provisioner" [511e6e52-fc2a-4562-9903-42ee1f2e0a2d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:57:21.581502    8968 system_pods.go:74] duration metric: took 162.9834ms to wait for pod list to return data ...
	I0910 19:57:21.581502    8968 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:57:21.774841    8968 request.go:632] Waited for 193.3257ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/default/serviceaccounts
	I0910 19:57:21.775075    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/default/serviceaccounts
	I0910 19:57:21.775075    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.775075    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.775075    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.777662    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:57:21.778707    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.778707    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.778707    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Content-Length: 262
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:22 GMT
	I0910 19:57:21.778707    8968 round_trippers.go:580]     Audit-Id: 964d94c8-4b92-4126-b01b-0a91d4cabb7a
	I0910 19:57:21.778804    8968 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1807"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5ec55b5c-25b1-4463-9e6c-90f1cae6d2f9","resourceVersion":"302","creationTimestamp":"2024-09-10T19:35:46Z"}}]}
	I0910 19:57:21.779276    8968 default_sa.go:45] found service account: "default"
	I0910 19:57:21.779276    8968 default_sa.go:55] duration metric: took 197.7608ms for default service account to be created ...
	I0910 19:57:21.779360    8968 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:57:21.962162    8968 request.go:632] Waited for 182.3324ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:21.962162    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:57:21.962162    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:21.962162    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:21.962162    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:21.966067    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:57:21.966067    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:21.966067    8968 round_trippers.go:580]     Audit-Id: 0423efcc-f10a-4002-9e37-e2a458baa7a9
	I0910 19:57:21.966067    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:21.966067    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:21.966067    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:21.966067    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:21.966067    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:22 GMT
	I0910 19:57:21.968299    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1807"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89970 chars]
	I0910 19:57:21.974911    8968 system_pods.go:86] 12 kube-system pods found
	I0910 19:57:21.974911    8968 system_pods.go:89] "coredns-6f6b679f8f-srtv8" [76dd899a-75f4-497d-a6a9-6b263f3a379d] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "etcd-multinode-629100" [2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kindnet-5crht" [d569a3a6-5b06-4adf-9ac0-294274923906] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kindnet-6tdpv" [2c45f0f2-5d24-4ec2-8e6b-06923ea85e78] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kindnet-lj2v2" [6643175a-32c5-4461-a441-08b9ac9ba98a] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-apiserver-multinode-629100" [5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-controller-manager-multinode-629100" [60adcb0c-808a-477c-9432-83dc8f96d6c0] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-proxy-4tzx6" [9bb18c28-3ee9-4028-a61d-3d7f6ea31894] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-proxy-qqrrg" [1fc7fdda-d5e4-4c72-96c1-2348eb72b491] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-proxy-wqf2d" [27e7846e-e506-48e0-96b9-351b3ca91703] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "kube-scheduler-multinode-629100" [fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7] Running
	I0910 19:57:21.974911    8968 system_pods.go:89] "storage-provisioner" [511e6e52-fc2a-4562-9903-42ee1f2e0a2d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:57:21.974911    8968 system_pods.go:126] duration metric: took 195.5383ms to wait for k8s-apps to be running ...
	I0910 19:57:21.974911    8968 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:57:21.985280    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:57:22.010423    8968 system_svc.go:56] duration metric: took 35.5093ms WaitForService to wait for kubelet
	I0910 19:57:22.010423    8968 kubeadm.go:582] duration metric: took 31.1521818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
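The line above shows kubeadm.go waiting on a map of readiness conditions (apiserver, apps_running, default_sa, kubelet, node_ready, system_pods) for a combined 31s. A minimal sketch of that polling pattern, with hypothetical stand-in checks rather than minikube's real probes:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// waitForComponents polls every check until all report ready or the timeout
// elapses, mirroring the wait map in the log. The checks here are
// hypothetical stand-ins, not minikube's actual probes.
func waitForComponents(checks map[string]func() bool, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pending := 0
		for _, check := range checks {
			if !check() {
				pending++
			}
		}
		if pending == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%d component(s) still not ready after %s", pending, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	var kubeletUp atomic.Bool
	checks := map[string]func() bool{
		"apiserver": func() bool { return true },
		"kubelet":   kubeletUp.Load,
	}
	// Flip the kubelet check after a short delay to exercise a few poll rounds.
	go func() { time.Sleep(20 * time.Millisecond); kubeletUp.Store(true) }()
	if err := waitForComponents(checks, time.Second, 5*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Println("all components ready")
}
```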
	I0910 19:57:22.010423    8968 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:57:22.163093    8968 request.go:632] Waited for 151.9344ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes
	I0910 19:57:22.163173    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes
	I0910 19:57:22.163242    8968 round_trippers.go:469] Request Headers:
	I0910 19:57:22.163242    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:57:22.163242    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:57:22.169715    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:57:22.169715    8968 round_trippers.go:577] Response Headers:
	I0910 19:57:22.169715    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:57:22.169715    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:57:22.169715    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:57:22 GMT
	I0910 19:57:22.169715    8968 round_trippers.go:580]     Audit-Id: a98a44fe-1b51-46f9-ac76-62ddd76c5b45
	I0910 19:57:22.169715    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:57:22.169715    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:57:22.169715    8968 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1807"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15482 chars]
	I0910 19:57:22.171354    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:57:22.171411    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:57:22.171438    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:57:22.171438    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:57:22.171438    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:57:22.171438    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:57:22.171438    8968 node_conditions.go:105] duration metric: took 161.0048ms to run NodePressure ...
	I0910 19:57:22.171498    8968 start.go:241] waiting for startup goroutines ...
	I0910 19:57:22.171498    8968 start.go:246] waiting for cluster config update ...
	I0910 19:57:22.171498    8968 start.go:255] writing updated cluster config ...
	I0910 19:57:22.175344    8968 out.go:201] 
	I0910 19:57:22.178396    8968 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:57:22.191223    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:57:22.191223    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:57:22.198746    8968 out.go:177] * Starting "multinode-629100-m02" worker node in "multinode-629100" cluster
	I0910 19:57:22.200146    8968 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:57:22.201144    8968 cache.go:56] Caching tarball of preloaded images
	I0910 19:57:22.201144    8968 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 19:57:22.201144    8968 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 19:57:22.201144    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:57:22.202709    8968 start.go:360] acquireMachinesLock for multinode-629100-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:57:22.203735    8968 start.go:364] duration metric: took 1.0266ms to acquireMachinesLock for "multinode-629100-m02"
	I0910 19:57:22.203956    8968 start.go:96] Skipping create...Using existing machine configuration
	I0910 19:57:22.203956    8968 fix.go:54] fixHost starting: m02
	I0910 19:57:22.204124    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:24.077130    8968 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 19:57:24.077130    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:24.077130    8968 fix.go:112] recreateIfNeeded on multinode-629100-m02: state=Stopped err=<nil>
	W0910 19:57:24.077130    8968 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 19:57:24.082968    8968 out.go:177] * Restarting existing hyperv VM for "multinode-629100-m02" ...
	I0910 19:57:24.084630    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-629100-m02
	I0910 19:57:26.822921    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:26.822921    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:26.823025    8968 main.go:141] libmachine: Waiting for host to start...
	I0910 19:57:26.823060    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:28.791060    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:28.791132    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:28.791202    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:31.067546    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:31.067771    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:32.082153    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:34.021730    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:34.021730    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:34.022792    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:36.224642    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:36.224642    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:37.232461    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:39.147450    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:39.147532    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:39.147618    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:41.331839    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:41.331839    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:42.339860    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:44.316685    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:44.317302    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:44.317369    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:46.542833    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:57:46.542833    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:47.557305    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:49.513438    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:49.513750    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:49.513750    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:51.782750    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:57:51.782750    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:51.784941    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:53.687447    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:53.687447    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:53.688156    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:57:55.961266    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:57:55.961266    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:55.962278    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:57:55.964406    8968 machine.go:93] provisionDockerMachine start ...
	I0910 19:57:55.964489    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:57:57.793217    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:57:57.793217    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:57:57.793856    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:00.045842    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:00.045842    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:00.049608    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:00.050121    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:00.050121    8968 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:58:00.183996    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:58:00.184076    8968 buildroot.go:166] provisioning hostname "multinode-629100-m02"
	I0910 19:58:00.184153    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:02.087899    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:02.088894    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:02.089102    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:04.364340    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:04.364340    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:04.369814    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:04.370464    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:04.370464    8968 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629100-m02 && echo "multinode-629100-m02" | sudo tee /etc/hostname
	I0910 19:58:04.527401    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629100-m02
	
	I0910 19:58:04.527589    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:06.390253    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:06.390253    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:06.391350    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:08.643044    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:08.643044    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:08.647016    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:08.647661    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:08.647661    8968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:58:08.804525    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:58:08.804525    8968 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 19:58:08.804525    8968 buildroot.go:174] setting up certificates
	I0910 19:58:08.804525    8968 provision.go:84] configureAuth start
	I0910 19:58:08.804525    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:10.661240    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:10.661240    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:10.662128    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:12.884507    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:12.885091    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:12.885091    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:14.754237    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:14.754237    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:14.754237    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:16.951876    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:16.951957    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:16.951957    8968 provision.go:143] copyHostCerts
	I0910 19:58:16.952205    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 19:58:16.952509    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 19:58:16.952509    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 19:58:16.952982    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 19:58:16.954244    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 19:58:16.954508    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 19:58:16.954508    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 19:58:16.954912    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 19:58:16.956137    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 19:58:16.956397    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 19:58:16.956465    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 19:58:16.956769    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 19:58:16.957536    8968 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-629100-m02 san=[127.0.0.1 172.31.210.34 localhost minikube multinode-629100-m02]
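The provision.go line above generates a server certificate whose SANs cover the IPs and DNS names listed (`san=[127.0.0.1 172.31.210.34 localhost minikube multinode-629100-m02]`). A self-contained sketch of building a certificate with that SAN set using Go's standard `crypto/x509`; it is self-signed for brevity, whereas minikube signs with its CA key:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-629100-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line, split into IP and DNS entries.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.31.210.34")},
		DNSNames:    []string{"localhost", "minikube", "multinode-629100-m02"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	// The cert must verify for the node's hostname, which is what lets the
	// Docker TLS endpoint at tcp://<ip>:2376 be dialed by name or address.
	if err := cert.VerifyHostname("multinode-629100-m02"); err != nil {
		panic(err)
	}
	fmt.Println("server cert valid for", cert.DNSNames, "and", cert.IPAddresses)
}
```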
	I0910 19:58:17.028662    8968 provision.go:177] copyRemoteCerts
	I0910 19:58:17.036800    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:58:17.036800    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:18.910091    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:18.910329    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:18.910402    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:21.144201    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:21.144201    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:21.144567    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:58:21.254579    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2175016s)
	I0910 19:58:21.254579    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 19:58:21.255584    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 19:58:21.300544    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 19:58:21.300947    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0910 19:58:21.344116    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 19:58:21.344116    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:58:21.383939    8968 provision.go:87] duration metric: took 12.5785846s to configureAuth
	I0910 19:58:21.383939    8968 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:58:21.385078    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:58:21.385309    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:23.273912    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:23.274133    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:23.274216    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:25.499488    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:25.499488    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:25.503783    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:25.503993    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:25.503993    8968 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 19:58:25.646704    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 19:58:25.646802    8968 buildroot.go:70] root file system type: tmpfs
	I0910 19:58:25.646924    8968 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 19:58:25.647033    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:27.480439    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:27.480736    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:27.480830    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:29.716955    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:29.716955    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:29.721049    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:29.721803    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:29.721803    8968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.215.172"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 19:58:29.883342    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.215.172
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 19:58:29.883342    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:31.758273    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:31.758273    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:31.758273    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:33.990420    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:33.990420    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:33.994608    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:33.995058    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:33.995058    8968 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 19:58:36.327414    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 19:58:36.327414    8968 machine.go:96] duration metric: took 40.3603513s to provisionDockerMachine
	I0910 19:58:36.327953    8968 start.go:293] postStartSetup for "multinode-629100-m02" (driver="hyperv")
	I0910 19:58:36.327989    8968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:58:36.337253    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:58:36.337253    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:38.192917    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:38.193589    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:38.193589    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:40.448270    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:40.448270    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:40.449281    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:58:40.560841    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2233089s)
	I0910 19:58:40.574781    8968 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:58:40.581280    8968 command_runner.go:130] > NAME=Buildroot
	I0910 19:58:40.581390    8968 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 19:58:40.581390    8968 command_runner.go:130] > ID=buildroot
	I0910 19:58:40.581390    8968 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 19:58:40.581390    8968 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 19:58:40.581466    8968 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:58:40.581494    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 19:58:40.581709    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 19:58:40.581818    8968 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 19:58:40.582347    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 19:58:40.590537    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:58:40.609201    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 19:58:40.654592    8968 start.go:296] duration metric: took 4.3263177s for postStartSetup
	I0910 19:58:40.654673    8968 fix.go:56] duration metric: took 1m18.4455612s for fixHost
	I0910 19:58:40.654673    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:42.517729    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:42.517729    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:42.517808    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:44.733070    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:44.733070    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:44.736859    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:44.737520    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:44.737520    8968 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:58:44.866431    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725998325.065080015
	
	I0910 19:58:44.866494    8968 fix.go:216] guest clock: 1725998325.065080015
	I0910 19:58:44.866554    8968 fix.go:229] Guest: 2024-09-10 19:58:45.065080015 +0000 UTC Remote: 2024-09-10 19:58:40.6546731 +0000 UTC m=+229.363864501 (delta=4.410406915s)
	I0910 19:58:44.866616    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:46.705155    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:46.705155    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:46.705155    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:48.950966    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:48.950966    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:48.954845    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 19:58:48.955435    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.210.34 22 <nil> <nil>}
	I0910 19:58:48.955435    8968 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725998324
	I0910 19:58:49.104380    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 19:58:44 UTC 2024
	
	I0910 19:58:49.104380    8968 fix.go:236] clock set: Tue Sep 10 19:58:44 UTC 2024
	 (err=<nil>)
	I0910 19:58:49.104380    8968 start.go:83] releasing machines lock for "multinode-629100-m02", held for 1m26.8948577s
	I0910 19:58:49.104380    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:50.952170    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:50.952170    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:50.952493    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:53.188195    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:53.188195    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:53.191677    8968 out.go:177] * Found network options:
	I0910 19:58:53.194159    8968 out.go:177]   - NO_PROXY=172.31.215.172
	W0910 19:58:53.196662    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 19:58:53.199651    8968 out.go:177]   - NO_PROXY=172.31.215.172
	W0910 19:58:53.202566    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 19:58:53.204129    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 19:58:53.205815    8968 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 19:58:53.206007    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:53.213927    8968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 19:58:53.213927    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:58:55.107993    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:55.108695    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:55.108830    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:55.130420    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:58:55.131430    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:55.131490    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:58:57.413249    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:57.413249    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:57.413558    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:58:57.436069    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:58:57.436069    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:58:57.436484    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:58:57.515254    8968 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0910 19:58:57.515353    8968 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.309149s)
	W0910 19:58:57.515444    8968 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 19:58:57.532343    8968 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0910 19:58:57.532856    8968 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3186431s)
	W0910 19:58:57.532856    8968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:58:57.541585    8968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:58:57.567812    8968 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0910 19:58:57.567812    8968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:58:57.568008    8968 start.go:495] detecting cgroup driver to use...
	I0910 19:58:57.568185    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:58:57.602759    8968 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0910 19:58:57.611976    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 19:58:57.637801    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0910 19:58:57.638881    8968 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 19:58:57.638881    8968 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 19:58:57.657829    8968 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 19:58:57.667103    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 19:58:57.694512    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:58:57.722238    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 19:58:57.749803    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 19:58:57.777151    8968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:58:57.804267    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 19:58:57.829298    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 19:58:57.856113    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 19:58:57.882607    8968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:58:57.898739    8968 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 19:58:57.906942    8968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:58:57.933108    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:58:58.096983    8968 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 19:58:58.124330    8968 start.go:495] detecting cgroup driver to use...
	I0910 19:58:58.133420    8968 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 19:58:58.154953    8968 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0910 19:58:58.155000    8968 command_runner.go:130] > [Unit]
	I0910 19:58:58.155000    8968 command_runner.go:130] > Description=Docker Application Container Engine
	I0910 19:58:58.155000    8968 command_runner.go:130] > Documentation=https://docs.docker.com
	I0910 19:58:58.155000    8968 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0910 19:58:58.155093    8968 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0910 19:58:58.155093    8968 command_runner.go:130] > StartLimitBurst=3
	I0910 19:58:58.155093    8968 command_runner.go:130] > StartLimitIntervalSec=60
	I0910 19:58:58.155170    8968 command_runner.go:130] > [Service]
	I0910 19:58:58.155170    8968 command_runner.go:130] > Type=notify
	I0910 19:58:58.155170    8968 command_runner.go:130] > Restart=on-failure
	I0910 19:58:58.155170    8968 command_runner.go:130] > Environment=NO_PROXY=172.31.215.172
	I0910 19:58:58.155170    8968 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0910 19:58:58.155170    8968 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0910 19:58:58.155170    8968 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0910 19:58:58.155253    8968 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0910 19:58:58.155253    8968 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0910 19:58:58.155253    8968 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0910 19:58:58.155253    8968 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0910 19:58:58.155253    8968 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0910 19:58:58.155337    8968 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0910 19:58:58.155337    8968 command_runner.go:130] > ExecStart=
	I0910 19:58:58.155337    8968 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0910 19:58:58.155337    8968 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0910 19:58:58.155410    8968 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0910 19:58:58.155410    8968 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0910 19:58:58.155410    8968 command_runner.go:130] > LimitNOFILE=infinity
	I0910 19:58:58.155410    8968 command_runner.go:130] > LimitNPROC=infinity
	I0910 19:58:58.155410    8968 command_runner.go:130] > LimitCORE=infinity
	I0910 19:58:58.155410    8968 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0910 19:58:58.155479    8968 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0910 19:58:58.155479    8968 command_runner.go:130] > TasksMax=infinity
	I0910 19:58:58.155479    8968 command_runner.go:130] > TimeoutStartSec=0
	I0910 19:58:58.155541    8968 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0910 19:58:58.155565    8968 command_runner.go:130] > Delegate=yes
	I0910 19:58:58.155565    8968 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0910 19:58:58.155594    8968 command_runner.go:130] > KillMode=process
	I0910 19:58:58.155594    8968 command_runner.go:130] > [Install]
	I0910 19:58:58.155594    8968 command_runner.go:130] > WantedBy=multi-user.target
	I0910 19:58:58.163062    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:58:58.193574    8968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:58:58.235402    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:58:58.265516    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:58:58.294415    8968 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 19:58:58.344693    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 19:58:58.368688    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:58:58.401898    8968 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0910 19:58:58.410974    8968 ssh_runner.go:195] Run: which cri-dockerd
	I0910 19:58:58.417197    8968 command_runner.go:130] > /usr/bin/cri-dockerd
	I0910 19:58:58.425577    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 19:58:58.442117    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 19:58:58.479219    8968 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 19:58:58.670245    8968 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 19:58:58.838491    8968 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 19:58:58.838557    8968 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 19:58:58.880336    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:58:59.046519    8968 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 19:59:01.690700    8968 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6440044s)
	I0910 19:59:01.700354    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 19:59:01.734787    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:59:01.767934    8968 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 19:59:01.957340    8968 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 19:59:02.132422    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:59:02.305370    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 19:59:02.344985    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 19:59:02.374215    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:59:02.553319    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 19:59:02.648803    8968 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 19:59:02.657346    8968 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 19:59:02.665338    8968 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0910 19:59:02.665338    8968 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 19:59:02.665338    8968 command_runner.go:130] > Device: 0,22	Inode: 866         Links: 1
	I0910 19:59:02.665338    8968 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0910 19:59:02.665338    8968 command_runner.go:130] > Access: 2024-09-10 19:59:02.797986721 +0000
	I0910 19:59:02.665338    8968 command_runner.go:130] > Modify: 2024-09-10 19:59:02.797986721 +0000
	I0910 19:59:02.665338    8968 command_runner.go:130] > Change: 2024-09-10 19:59:02.800987138 +0000
	I0910 19:59:02.665338    8968 command_runner.go:130] >  Birth: -
	I0910 19:59:02.665338    8968 start.go:563] Will wait 60s for crictl version
	I0910 19:59:02.672339    8968 ssh_runner.go:195] Run: which crictl
	I0910 19:59:02.678477    8968 command_runner.go:130] > /usr/bin/crictl
	I0910 19:59:02.686231    8968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:59:02.733057    8968 command_runner.go:130] > Version:  0.1.0
	I0910 19:59:02.733162    8968 command_runner.go:130] > RuntimeName:  docker
	I0910 19:59:02.733162    8968 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0910 19:59:02.733162    8968 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 19:59:02.735131    8968 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 19:59:02.743347    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:59:02.776022    8968 command_runner.go:130] > 27.2.0
	I0910 19:59:02.784928    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 19:59:02.819061    8968 command_runner.go:130] > 27.2.0
	I0910 19:59:02.822643    8968 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 19:59:02.825640    8968 out.go:177]   - env NO_PROXY=172.31.215.172
	I0910 19:59:02.827666    8968 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 19:59:02.831639    8968 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 19:59:02.831639    8968 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 19:59:02.831639    8968 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 19:59:02.831639    8968 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 19:59:02.833638    8968 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 19:59:02.833638    8968 ip.go:214] interface addr: 172.31.208.1/20
	I0910 19:59:02.841651    8968 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 19:59:02.847815    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:59:02.868243    8968 mustload.go:65] Loading cluster: multinode-629100
	I0910 19:59:02.869062    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:02.869656    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:59:04.719635    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:04.720073    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:04.720073    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:59:04.720761    8968 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100 for IP: 172.31.210.34
	I0910 19:59:04.720761    8968 certs.go:194] generating shared ca certs ...
	I0910 19:59:04.720761    8968 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:59:04.721293    8968 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 19:59:04.721608    8968 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 19:59:04.721754    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 19:59:04.722043    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 19:59:04.722182    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 19:59:04.722327    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 19:59:04.722855    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 19:59:04.723133    8968 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 19:59:04.723271    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 19:59:04.723555    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 19:59:04.723890    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 19:59:04.723930    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 19:59:04.724462    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 19:59:04.724770    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 19:59:04.724918    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:04.725065    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 19:59:04.725289    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:59:04.775335    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 19:59:04.818810    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:59:04.864575    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 19:59:04.911974    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 19:59:04.965545    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:59:05.007719    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 19:59:05.065810    8968 ssh_runner.go:195] Run: openssl version
	I0910 19:59:05.074328    8968 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 19:59:05.086104    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 19:59:05.116218    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 19:59:05.122357    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:59:05.122457    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 19:59:05.131346    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 19:59:05.138968    8968 command_runner.go:130] > 51391683
	I0910 19:59:05.149143    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 19:59:05.177207    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 19:59:05.206663    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 19:59:05.213638    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:59:05.213638    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 19:59:05.223372    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 19:59:05.232542    8968 command_runner.go:130] > 3ec20f2e
	I0910 19:59:05.241926    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:59:05.267436    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:59:05.301569    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:05.309084    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:05.309084    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:05.317909    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:59:05.325840    8968 command_runner.go:130] > b5213941
	I0910 19:59:05.336138    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
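Each cert block above follows OpenSSL's subject-hash trust-store convention: `openssl x509 -hash -noout` prints an 8-hex-digit subject hash (the `51391683`, `3ec20f2e`, `b5213941` values in the log), and the cert is then symlinked as `<hash>.0` under `/etc/ssl/certs` so OpenSSL can find it by subject at verification time. A sketch of the same convention against a throwaway self-signed cert (every path here is illustrative):

```shell
# Reproduce the <hash>.0 symlink convention with a disposable cert.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout "$dir/demo.key" -out "$dir/demo.pem" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$dir/demo.pem")  # 8 hex digits of the subject hash
ln -fs "$dir/demo.pem" "$dir/$h.0"                  # the name OpenSSL looks up
openssl x509 -noout -subject -in "$dir/$h.0"
```

The `.0` suffix is a collision index; a second cert with the same subject hash would be linked as `<hash>.1`, and so on.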
	I0910 19:59:05.364657    8968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:59:05.371821    8968 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 19:59:05.372349    8968 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 19:59:05.372589    8968 kubeadm.go:934] updating node {m02 172.31.210.34 8443 v1.31.0 docker false true} ...
	I0910 19:59:05.372759    8968 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-629100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.210.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:59:05.381403    8968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:59:05.400696    8968 command_runner.go:130] > kubeadm
	I0910 19:59:05.400751    8968 command_runner.go:130] > kubectl
	I0910 19:59:05.400751    8968 command_runner.go:130] > kubelet
	I0910 19:59:05.400751    8968 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:59:05.411313    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0910 19:59:05.428417    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0910 19:59:05.456723    8968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:59:05.493514    8968 ssh_runner.go:195] Run: grep 172.31.215.172	control-plane.minikube.internal$ /etc/hosts
	I0910 19:59:05.499968    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.215.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:59:05.528089    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:59:05.708835    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:59:05.735452    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:59:05.736454    8968 start.go:317] joinCluster: &{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:59:05.736454    8968 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:59:05.736454    8968 host.go:66] Checking if "multinode-629100-m02" exists ...
	I0910 19:59:05.736454    8968 mustload.go:65] Loading cluster: multinode-629100
	I0910 19:59:05.737451    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:05.737451    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:59:07.666861    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:07.666861    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:07.666861    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:59:07.667489    8968 api_server.go:166] Checking apiserver status ...
	I0910 19:59:07.675811    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:59:07.675811    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:59:09.577182    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:09.577182    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:09.577182    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:11.834950    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:59:11.835415    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:11.835760    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:59:11.973428    8968 command_runner.go:130] > 1954
	I0910 19:59:11.973506    8968 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.297409s)
	I0910 19:59:11.981912    8968 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1954/cgroup
	W0910 19:59:11.998917    8968 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1954/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:59:12.010921    8968 ssh_runner.go:195] Run: ls
	I0910 19:59:12.017848    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 19:59:12.027154    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 200:
	ok
	I0910 19:59:12.036465    8968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain multinode-629100-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0910 19:59:12.212577    8968 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-5crht, kube-system/kube-proxy-qqrrg
	I0910 19:59:15.233281    8968 command_runner.go:130] > node/multinode-629100-m02 cordoned
	I0910 19:59:15.234104    8968 command_runner.go:130] > pod "busybox-7dff88458-7c4qt" has DeletionTimestamp older than 1 seconds, skipping
	I0910 19:59:15.234104    8968 command_runner.go:130] > node/multinode-629100-m02 drained
	I0910 19:59:15.234224    8968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain multinode-629100-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1975453s)
	I0910 19:59:15.234330    8968 node.go:128] successfully drained node "multinode-629100-m02"
	I0910 19:59:15.234510    8968 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0910 19:59:15.234652    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:59:17.078768    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:17.078768    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:17.078768    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:19.300971    8968 main.go:141] libmachine: [stdout =====>] : 172.31.210.34
	
	I0910 19:59:19.301462    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:19.301756    8968 sshutil.go:53] new ssh client: &{IP:172.31.210.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:59:19.716778    8968 command_runner.go:130] ! W0910 19:59:19.944102    1618 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0910 19:59:19.919951    8968 command_runner.go:130] ! W0910 19:59:20.147186    1618 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod 7fc605afb9d0eb24f26fc2e076eefe2380c407d03c61cca1840fcb37641ffa4f: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-7dff88458-7c4qt_default" network: cni config uninitialized
	I0910 19:59:19.937203    8968 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 19:59:19.937320    8968 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0910 19:59:19.937320    8968 command_runner.go:130] > [reset] Stopping the kubelet service
	I0910 19:59:19.937320    8968 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0910 19:59:19.937320    8968 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0910 19:59:19.937440    8968 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0910 19:59:19.937440    8968 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0910 19:59:19.937507    8968 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0910 19:59:19.937538    8968 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0910 19:59:19.937538    8968 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0910 19:59:19.937580    8968 command_runner.go:130] > to reset your system's IPVS tables.
	I0910 19:59:19.937601    8968 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0910 19:59:19.937601    8968 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0910 19:59:19.937601    8968 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.7027282s)
	I0910 19:59:19.937601    8968 node.go:155] successfully reset node "multinode-629100-m02"
	I0910 19:59:19.939296    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:59:19.939463    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:59:19.940589    8968 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 19:59:19.941110    8968 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0910 19:59:19.941170    8968 round_trippers.go:463] DELETE https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:19.941170    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:19.941170    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:19.941170    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:19.941170    8968 round_trippers.go:473]     Content-Type: application/json
	I0910 19:59:19.961966    8968 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0910 19:59:19.961966    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Content-Length: 171
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:20 GMT
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Audit-Id: 95943bc8-10fc-4acf-8db3-39c4acf08412
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:19.961966    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:19.961966    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:19.961966    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:19.961966    8968 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-629100-m02","kind":"nodes","uid":"a82f3bc4-899c-406b-b321-16365e535c5d"}}
	I0910 19:59:19.961966    8968 node.go:180] successfully deleted node "multinode-629100-m02"
	I0910 19:59:19.961966    8968 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:59:19.962994    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 19:59:19.962994    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:59:21.791387    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:21.791899    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:21.791899    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:24.053503    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 19:59:24.053593    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:24.053916    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:59:24.220825    8968 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token tizv8b.w1fjagtp22n8yb3v --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
	I0910 19:59:24.220900    8968 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2576208s)
	I0910 19:59:24.220986    8968 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:59:24.220986    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tizv8b.w1fjagtp22n8yb3v --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m02"
	I0910 19:59:24.395081    8968 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:59:25.722116    8968 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 19:59:25.722317    8968 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0910 19:59:25.722317    8968 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0910 19:59:25.722317    8968 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:59:25.722317    8968 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:59:25.722317    8968 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0910 19:59:25.722425    8968 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:59:25.722425    8968 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001927576s
	I0910 19:59:25.722425    8968 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0910 19:59:25.722425    8968 command_runner.go:130] > This node has joined the cluster:
	I0910 19:59:25.722425    8968 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0910 19:59:25.722425    8968 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0910 19:59:25.722425    8968 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0910 19:59:25.722724    8968 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tizv8b.w1fjagtp22n8yb3v --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m02": (1.5015967s)
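The `--discovery-token-ca-cert-hash sha256:…` value in the join command above is, per the kubeadm documentation, the SHA-256 digest of the cluster CA certificate's DER-encoded public key (its SubjectPublicKeyInfo), which lets the joining node authenticate the control plane before trusting it. A sketch that derives such a hash from a throwaway self-signed CA (a stand-in, not the real minikubeCA key pair):

```shell
# kubeadm-style discovery hash: SHA-256 over the CA cert's DER public key.
# The generated CA is disposable; only the hashing pipeline matters here.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

Run against the real `/var/lib/minikube/certs/ca.crt`, this pipeline would reproduce the `sha256:86983a25…` value printed by `kubeadm token create --print-join-command`.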
	I0910 19:59:25.722724    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 19:59:25.923187    8968 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0910 19:59:26.098950    8968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-629100-m02 minikube.k8s.io/updated_at=2024_09_10T19_59_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=multinode-629100 minikube.k8s.io/primary=false
	I0910 19:59:26.217633    8968 command_runner.go:130] > node/multinode-629100-m02 labeled
	I0910 19:59:26.221028    8968 start.go:319] duration metric: took 20.4832075s to joinCluster
	I0910 19:59:26.221225    8968 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0910 19:59:26.222699    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:26.224788    8968 out.go:177] * Verifying Kubernetes components...
	I0910 19:59:26.235193    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:59:26.444869    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:59:26.471340    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 19:59:26.472117    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 19:59:26.473026    8968 node_ready.go:35] waiting up to 6m0s for node "multinode-629100-m02" to be "Ready" ...
	I0910 19:59:26.473326    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:26.473388    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:26.473388    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:26.473388    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:26.476153    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:26.477214    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:26.477214    8968 round_trippers.go:580]     Audit-Id: b2b1f0e0-6dee-434e-a1a3-9948d4f5a701
	I0910 19:59:26.477214    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:26.477214    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:26.477214    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:26.477214    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:26.477214    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:26 GMT
	I0910 19:59:26.477381    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:26.979410    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:26.979505    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:26.979572    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:26.979572    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:26.986618    8968 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 19:59:26.986618    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:26.986618    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:26.986618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:26.986618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:26.986618    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:27 GMT
	I0910 19:59:26.986618    8968 round_trippers.go:580]     Audit-Id: 41b91e8b-8422-413f-8bd8-e6412595a49d
	I0910 19:59:26.986618    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:26.986618    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:27.482698    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:27.482782    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:27.482782    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:27.482782    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:27.486684    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:27.486684    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:27.486684    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:27 GMT
	I0910 19:59:27.486781    8968 round_trippers.go:580]     Audit-Id: 52689a91-309c-423c-8aa2-a9c71397e8ac
	I0910 19:59:27.486781    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:27.486781    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:27.486781    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:27.486781    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:27.486964    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:27.974263    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:27.974263    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:27.974263    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:27.974263    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:27.978048    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:27.978048    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:27.978048    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:28 GMT
	I0910 19:59:27.978048    8968 round_trippers.go:580]     Audit-Id: 50acc0b1-5e33-4bc2-8da2-8ddd51782af0
	I0910 19:59:27.978048    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:27.978048    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:27.978048    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:27.978048    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:27.978048    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:28.481599    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:28.481694    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:28.481694    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:28.481694    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:28.485258    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:28.485442    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:28.485442    8968 round_trippers.go:580]     Audit-Id: c1371e6a-e639-463c-ab14-5d1ca774b106
	I0910 19:59:28.485511    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:28.485511    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:28.485587    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:28.485635    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:28.485635    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:28 GMT
	I0910 19:59:28.485836    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:28.485864    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:28.973934    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:28.973996    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:28.973996    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:28.973996    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:28.977327    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:28.977327    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:28.977327    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:28.977327    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:29 GMT
	I0910 19:59:28.977327    8968 round_trippers.go:580]     Audit-Id: 5895e6b3-2911-4a19-881f-8f398429fbb8
	I0910 19:59:28.977905    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:28.977905    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:28.977905    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:28.978046    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:29.475701    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:29.475853    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:29.475853    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:29.475853    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:29.478701    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:29.478701    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:29.479583    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:29.479583    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:29 GMT
	I0910 19:59:29.479583    8968 round_trippers.go:580]     Audit-Id: 7d9eeda5-40a2-4b9c-92b6-bfd3c992fe51
	I0910 19:59:29.479583    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:29.479583    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:29.479583    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:29.479583    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1947","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3559 chars]
	I0910 19:59:29.976235    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:29.976309    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:29.976309    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:29.976309    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:29.981395    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:29.981395    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:29.981934    8968 round_trippers.go:580]     Audit-Id: 41fadc76-e7e1-4428-9763-3e45ff103d89
	I0910 19:59:29.981934    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:29.981934    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:29.981934    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:29.981934    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:29.981934    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:30 GMT
	I0910 19:59:29.982111    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:30.477104    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:30.477104    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:30.477179    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:30.477179    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:30.480832    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:30.480995    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:30.480995    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:30.480995    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:30.480995    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:30.480995    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:30.480995    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:30 GMT
	I0910 19:59:30.480995    8968 round_trippers.go:580]     Audit-Id: 1a29d338-467a-410f-b324-cf037f217777
	I0910 19:59:30.481171    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:30.976941    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:30.976941    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:30.976941    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:30.976941    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:30.981847    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:30.982266    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:30.982266    8968 round_trippers.go:580]     Audit-Id: 253ef817-3848-4eb2-b1a8-2d1326140059
	I0910 19:59:30.982266    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:30.982266    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:30.982266    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:30.982266    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:30.982266    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:31 GMT
	I0910 19:59:30.982389    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:30.982779    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:31.477831    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:31.477898    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:31.477898    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:31.477898    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:31.481068    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:31.481602    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:31.481602    8968 round_trippers.go:580]     Audit-Id: 4c59cdbf-2917-49f1-a32e-f38970176264
	I0910 19:59:31.481602    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:31.481602    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:31.481602    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:31.481602    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:31.481602    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:31 GMT
	I0910 19:59:31.481883    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:31.979081    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:31.979081    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:31.979081    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:31.979081    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:31.981876    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:31.981876    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:31.981876    8968 round_trippers.go:580]     Audit-Id: a2d8df23-b4bc-4050-b1dd-4915e936b8da
	I0910 19:59:31.981876    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:31.981876    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:31.982618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:31.982618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:31.982618    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:32 GMT
	I0910 19:59:31.982618    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:32.479369    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:32.479369    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:32.479369    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:32.479369    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:32.482975    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:32.482975    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:32.482975    8968 round_trippers.go:580]     Audit-Id: b8a16908-3904-460d-9d09-a7773f73ce65
	I0910 19:59:32.482975    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:32.482975    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:32.482975    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:32.482975    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:32.483257    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:32 GMT
	I0910 19:59:32.483340    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:32.977410    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:32.977410    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:32.977715    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:32.977715    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:32.982966    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:32.982966    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:32.982966    8968 round_trippers.go:580]     Audit-Id: 04d0aa2c-5a6e-4665-a588-6b330f083489
	I0910 19:59:32.982966    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:32.982966    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:32.982966    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:32.982966    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:32.982966    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:33 GMT
	I0910 19:59:32.983517    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:32.983917    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:33.478678    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:33.478678    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:33.478678    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:33.478678    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:33.484099    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:33.484099    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:33.484099    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:33 GMT
	I0910 19:59:33.484099    8968 round_trippers.go:580]     Audit-Id: c1983ed6-54d4-45b7-848a-2712bcecd5cf
	I0910 19:59:33.484099    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:33.484099    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:33.484099    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:33.484245    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:33.484415    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:33.978665    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:33.978727    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:33.978727    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:33.978727    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:33.982189    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:33.982189    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:33.982189    8968 round_trippers.go:580]     Audit-Id: fa250dff-3b8c-4dac-9608-70e1ed500341
	I0910 19:59:33.982189    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:33.982189    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:33.982189    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:33.982189    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:33.982189    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:34 GMT
	I0910 19:59:33.982630    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:34.480076    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:34.480286    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:34.480286    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:34.480286    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:34.483783    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:34.483783    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:34.483783    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:34.483783    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:34.483783    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:34 GMT
	I0910 19:59:34.483783    8968 round_trippers.go:580]     Audit-Id: 1c56e164-0ea4-443f-b5c4-a14044386ce3
	I0910 19:59:34.483783    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:34.483783    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:34.484218    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:34.981099    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:34.981183    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:34.981264    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:34.981264    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:34.984941    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:34.985016    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:34.985016    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:35 GMT
	I0910 19:59:34.985016    8968 round_trippers.go:580]     Audit-Id: 46bb7b05-8d5b-4459-983d-41a4dade3324
	I0910 19:59:34.985016    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:34.985082    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:34.985082    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:34.985082    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:34.985351    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1972","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3668 chars]
	I0910 19:59:34.985997    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:35.484822    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:35.484822    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:35.484822    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:35.484822    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:35.487724    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:35.488656    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:35.488656    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:35.488656    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:35 GMT
	I0910 19:59:35.488656    8968 round_trippers.go:580]     Audit-Id: 37aec342-1718-4374-a0d0-543d2d25d7d5
	I0910 19:59:35.488656    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:35.488656    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:35.488656    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:35.488656    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:35.988545    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:35.988545    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:35.988618    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:35.988618    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:35.991291    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:35.991800    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:35.991800    8968 round_trippers.go:580]     Audit-Id: 93060038-eacf-4b30-a595-c00ae1ffe533
	I0910 19:59:35.991859    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:35.991859    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:35.991859    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:35.991859    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:35.991859    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:36 GMT
	I0910 19:59:35.991926    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:36.489235    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:36.489235    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:36.489235    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:36.489235    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:36.492950    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:36.493345    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:36.493345    8968 round_trippers.go:580]     Audit-Id: c23a9b68-9ed6-471a-9602-0f5d414c330c
	I0910 19:59:36.493345    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:36.493345    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:36.493345    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:36.493345    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:36.493345    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:36 GMT
	I0910 19:59:36.493345    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:36.988810    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:36.988923    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:36.988923    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:36.988923    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:36.993977    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:36.993977    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:36.994071    8968 round_trippers.go:580]     Audit-Id: 40708c38-0382-4cce-8ff8-5be3919c531f
	I0910 19:59:36.994071    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:36.994071    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:36.994071    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:36.994071    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:36.994071    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:37 GMT
	I0910 19:59:36.994357    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:36.995014    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:37.490334    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:37.490334    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:37.490410    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:37.490410    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:37.494556    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:37.495301    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:37.495301    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:37.495301    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:37 GMT
	I0910 19:59:37.495301    8968 round_trippers.go:580]     Audit-Id: 9824b441-32fb-492a-a108-a02f9b577435
	I0910 19:59:37.495301    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:37.495301    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:37.495429    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:37.495619    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:37.976510    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:37.976589    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:37.976589    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:37.976666    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:37.982543    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:37.982543    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:37.982543    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:37.982543    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:37.982543    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:38 GMT
	I0910 19:59:37.982543    8968 round_trippers.go:580]     Audit-Id: f2233a22-321a-4a6f-9e07-193ab46cd78d
	I0910 19:59:37.982543    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:37.982543    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:37.983242    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:38.476132    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:38.476132    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:38.476249    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:38.476249    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:38.480770    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:38.481029    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:38.481029    8968 round_trippers.go:580]     Audit-Id: 362f4f44-720b-4a3f-b67b-58f44eb3d5b2
	I0910 19:59:38.481029    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:38.481029    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:38.481029    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:38.481029    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:38.481029    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:38 GMT
	I0910 19:59:38.481298    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:38.978179    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:38.978179    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:38.978258    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:38.978258    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:38.984809    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:59:38.984809    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:38.984809    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:38.984809    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:38.984809    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:38.984809    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:38.984809    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:39 GMT
	I0910 19:59:38.984809    8968 round_trippers.go:580]     Audit-Id: 5c9bef8b-eaee-4f1f-8e0f-469840c7699d
	I0910 19:59:38.984809    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:39.476667    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:39.476740    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:39.476740    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:39.476740    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:39.480411    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:39.480761    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:39.480761    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:39.480761    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:39.480761    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:39.480852    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:39.480852    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:39 GMT
	I0910 19:59:39.480852    8968 round_trippers.go:580]     Audit-Id: 204e5a9e-ed7d-433c-a657-47c4421aad1f
	I0910 19:59:39.481168    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:39.481925    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
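The repeated `node_ready.go:53` lines above come from a polling loop: minikube fetches the node object via `GET /api/v1/nodes/<name>` and inspects its `Ready` condition until the status flips to `"True"`. A minimal sketch of that check (Python, illustrative only — `node_is_ready` is a hypothetical helper, not minikube code) against the standard Node status schema:

```python
import json

def node_is_ready(node_json: str) -> bool:
    """Return True if the Node's "Ready" condition has status "True".

    Mirrors the kind of check minikube's node_ready.go performs on the
    object returned by GET /api/v1/nodes/<name>. Sketch only.
    """
    node = json.loads(node_json)
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # no Ready condition reported yet

# A node still coming up reports Ready=False, so the loop keeps polling.
sample = json.dumps({
    "kind": "Node",
    "metadata": {"name": "multinode-629100-m02"},
    "status": {"conditions": [{"type": "Ready", "status": "False"}]},
})
print(node_is_ready(sample))  # → False
```

The log shows this check repeating every ~500 ms; the `resourceVersion` bump (1972 → 1978) between responses indicates the node object was updated server-side even though `Ready` remained `"False"`.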
	I0910 19:59:39.977492    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:39.977492    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:39.977492    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:39.977492    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:39.982755    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:39.982755    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:39.982755    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:39.982755    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:39.982755    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:40 GMT
	I0910 19:59:39.982755    8968 round_trippers.go:580]     Audit-Id: 89774f9d-a748-4735-8621-26ff0e497d18
	I0910 19:59:39.982755    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:39.982755    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:39.983056    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:40.475411    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:40.475411    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:40.475411    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:40.475411    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:40.478982    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:40.478982    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:40.478982    8968 round_trippers.go:580]     Audit-Id: c55e79f5-bb7c-46fe-a7aa-d1911499412b
	I0910 19:59:40.478982    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:40.479487    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:40.479487    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:40.479487    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:40.479487    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:40 GMT
	I0910 19:59:40.479753    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:40.988926    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:40.988926    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:40.989000    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:40.989000    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:40.992680    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:40.992680    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:40.992680    8968 round_trippers.go:580]     Audit-Id: a8eb34e3-12c2-4c37-9ca3-f24a50cd7ac7
	I0910 19:59:40.992680    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:40.992788    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:40.992788    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:40.992788    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:40.992788    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:41 GMT
	I0910 19:59:40.992980    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:41.487748    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:41.487818    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:41.487886    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:41.487886    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:41.492278    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:41.492311    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:41.492311    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:41.492311    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:41.492311    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:41.492311    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:41.492311    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:41 GMT
	I0910 19:59:41.492311    8968 round_trippers.go:580]     Audit-Id: 5e01bc85-f1a1-466c-90d4-a4363185bf95
	I0910 19:59:41.492311    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:41.493152    8968 node_ready.go:53] node "multinode-629100-m02" has status "Ready":"False"
	I0910 19:59:41.989732    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:41.989732    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:41.989732    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:41.989732    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:41.993382    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:41.993382    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:41.993382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:41.993382    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:42 GMT
	I0910 19:59:41.993382    8968 round_trippers.go:580]     Audit-Id: b0832d61-c72f-4604-9d9e-3d263876acf3
	I0910 19:59:41.993382    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:41.993382    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:41.993382    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:41.994436    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:42.477425    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:42.477511    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:42.477511    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:42.477511    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:42.481375    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:42.481483    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:42.481560    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:42.481560    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:42.481560    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:42.481560    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:42.481560    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:42 GMT
	I0910 19:59:42.481704    8968 round_trippers.go:580]     Audit-Id: 01b7da05-56f8-4e9b-bf0a-3155f1c8b118
	I0910 19:59:42.481865    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1978","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4060 chars]
	I0910 19:59:42.983095    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:42.983328    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:42.983328    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:42.983409    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:42.989332    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:42.989332    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:42.989332    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:42.989332    8968 round_trippers.go:580]     Audit-Id: 32a887cf-d6d4-4ba6-8075-c4e5fb15cfbb
	I0910 19:59:42.989332    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:42.989332    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:42.989332    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:42.989332    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:42.989332    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1990","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3926 chars]
	I0910 19:59:42.990398    8968 node_ready.go:49] node "multinode-629100-m02" has status "Ready":"True"
	I0910 19:59:42.990491    8968 node_ready.go:38] duration metric: took 16.5161989s for node "multinode-629100-m02" to be "Ready" ...
	I0910 19:59:42.990491    8968 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:59:42.990617    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 19:59:42.990667    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:42.990695    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:42.990695    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:42.994112    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:42.994112    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:42.994112    8968 round_trippers.go:580]     Audit-Id: 5ae8eeaa-277c-461e-b272-0261c43edfa3
	I0910 19:59:42.994112    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:42.994112    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:42.994112    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:42.994112    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:42.994112    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:42.995111    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1992"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89569 chars]
	I0910 19:59:42.999566    8968 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:42.999566    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 19:59:42.999566    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:42.999566    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:42.999566    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.002358    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:43.002358    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.002358    8968 round_trippers.go:580]     Audit-Id: d909d94a-4b5d-417c-a1d2-79b95ff612f6
	I0910 19:59:43.002358    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.002358    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.002358    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.002358    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.002358    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.002992    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0910 19:59:43.003660    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:43.003719    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.003719    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.003719    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.005677    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:59:43.005677    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.005677    8968 round_trippers.go:580]     Audit-Id: 3a82626a-8872-40f5-9e53-38f4605b11f2
	I0910 19:59:43.005677    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.005677    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.005677    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.005677    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.005677    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.006215    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:43.006610    8968 pod_ready.go:93] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.006675    8968 pod_ready.go:82] duration metric: took 7.108ms for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.006675    8968 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.006800    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 19:59:43.006800    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.006800    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.006800    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.009146    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:43.009146    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.009146    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.009618    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.009618    8968 round_trippers.go:580]     Audit-Id: 4be1e62c-ded0-48a7-a637-ff0c9a6d7897
	I0910 19:59:43.009618    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.009618    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.009618    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.009710    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1766","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I0910 19:59:43.010257    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:43.010257    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.010257    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.010257    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.013034    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:43.013359    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.013359    8968 round_trippers.go:580]     Audit-Id: ba612ca7-9f21-4535-9350-a8557ffb79c6
	I0910 19:59:43.013359    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.013410    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.013410    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.013410    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.013410    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.013588    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:43.014116    8968 pod_ready.go:93] pod "etcd-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.014150    8968 pod_ready.go:82] duration metric: took 7.475ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.014150    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.014296    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 19:59:43.014296    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.014296    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.014334    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.018506    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:43.018506    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.018506    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.018506    8968 round_trippers.go:580]     Audit-Id: b4dff58b-0826-4660-90ce-50af642158e9
	I0910 19:59:43.018506    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.018506    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.018506    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.018506    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.019079    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5","resourceVersion":"1763","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.215.172:8443","kubernetes.io/config.hash":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.mirror":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.seen":"2024-09-10T19:56:41.591483300Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I0910 19:59:43.019161    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:43.019161    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.019161    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.019161    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.021735    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 19:59:43.021735    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.021735    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.021735    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.021735    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.021735    8968 round_trippers.go:580]     Audit-Id: 5fcceb6e-c937-4989-afab-265cf1706208
	I0910 19:59:43.021735    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.021735    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.021735    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:43.021735    8968 pod_ready.go:93] pod "kube-apiserver-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.022721    8968 pod_ready.go:82] duration metric: took 8.5319ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.022721    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.022721    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 19:59:43.022721    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.022721    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.022721    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.024586    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:59:43.024586    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.024586    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.024586    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.024586    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.024586    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.024586    8968 round_trippers.go:580]     Audit-Id: 1cf28b16-5eac-4f1e-b395-ab20eb169c43
	I0910 19:59:43.024586    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.025583    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"1770","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0910 19:59:43.025583    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:43.025583    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.025583    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.025583    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.027388    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 19:59:43.027388    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.027388    8968 round_trippers.go:580]     Audit-Id: 44f7ab70-07ed-4b84-ab82-ce9658869fe0
	I0910 19:59:43.027388    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.027388    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.027388    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.027388    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.027388    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.028452    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:43.028789    8968 pod_ready.go:93] pod "kube-controller-manager-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.028860    8968 pod_ready.go:82] duration metric: took 6.1386ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.028860    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.186310    8968 request.go:632] Waited for 157.3754ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:59:43.186480    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 19:59:43.186597    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.186597    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.186597    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.192186    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 19:59:43.192186    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.192186    8968 round_trippers.go:580]     Audit-Id: 80b61aaa-08ed-4160-89a3-1a2b09b50d6f
	I0910 19:59:43.192186    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.192186    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.192186    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.192186    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.192186    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.192832    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4tzx6","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bb18c28-3ee9-4028-a61d-3d7f6ea31894","resourceVersion":"1613","creationTimestamp":"2024-09-10T19:42:55Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0910 19:59:43.390761    8968 request.go:632] Waited for 196.9981ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:59:43.390905    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 19:59:43.390905    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.390905    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.390905    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.397776    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:59:43.397794    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.397794    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.397794    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.397794    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.397794    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.397794    8968 round_trippers.go:580]     Audit-Id: 4cbe491b-4944-411f-817e-9395908ded40
	I0910 19:59:43.397794    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.397794    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"39f4665c-e276-4967-a647-46b3a9a1c67a","resourceVersion":"1764","creationTimestamp":"2024-09-10T19:52:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_52_30_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4394 chars]
	I0910 19:59:43.398745    8968 pod_ready.go:98] node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:59:43.398745    8968 pod_ready.go:82] duration metric: took 369.8604ms for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	E0910 19:59:43.398745    8968 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-629100-m03" hosting pod "kube-proxy-4tzx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-629100-m03" has status "Ready":"Unknown"
	I0910 19:59:43.398816    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.593101    8968 request.go:632] Waited for 194.2722ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:59:43.593420    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 19:59:43.593420    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.593420    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.593508    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.597679    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:43.597679    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.597770    8968 round_trippers.go:580]     Audit-Id: a5d23962-b82a-48ae-b5cb-6940e1b3e384
	I0910 19:59:43.597770    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.597770    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.597770    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.597770    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.597770    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:43 GMT
	I0910 19:59:43.598242    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qqrrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fc7fdda-d5e4-4c72-96c1-2348eb72b491","resourceVersion":"1960","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6203 chars]
	I0910 19:59:43.796728    8968 request.go:632] Waited for 197.0145ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:43.796799    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 19:59:43.796799    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.796799    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.796799    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.804054    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:59:43.804054    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.804054    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.804054    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.804054    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.804054    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:43.804054    8968 round_trippers.go:580]     Audit-Id: e50fdf33-e254-48e0-89c7-c444dd7117c1
	I0910 19:59:43.804054    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.804054    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1990","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3926 chars]
	I0910 19:59:43.804054    8968 pod_ready.go:93] pod "kube-proxy-qqrrg" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:43.804054    8968 pod_ready.go:82] duration metric: took 405.211ms for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.805080    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:43.983618    8968 request.go:632] Waited for 178.1884ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:59:43.983618    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 19:59:43.983823    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:43.983823    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:43.983922    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:43.988306    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 19:59:43.988306    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:43.988502    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:43.988502    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:43.988502    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:43.988502    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:43.988502    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:43.988502    8968 round_trippers.go:580]     Audit-Id: 354a06b8-0595-4a89-a10c-d84a5fe4ac4b
	I0910 19:59:43.988677    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"1662","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0910 19:59:44.186776    8968 request.go:632] Waited for 197.2184ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:44.186884    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:44.186991    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:44.186991    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:44.186991    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:44.190426    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:44.191437    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:44.191437    8968 round_trippers.go:580]     Audit-Id: 0ec4ace3-0a37-4639-91ea-0624ff254aa1
	I0910 19:59:44.191501    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:44.191501    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:44.191501    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:44.191501    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:44.191501    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:44.191697    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:44.192092    8968 pod_ready.go:93] pod "kube-proxy-wqf2d" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:44.192239    8968 pod_ready.go:82] duration metric: took 387.1334ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:44.192239    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:44.388952    8968 request.go:632] Waited for 196.625ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:59:44.389119    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 19:59:44.389119    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:44.389119    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:44.389119    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:44.395392    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 19:59:44.395554    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:44.395554    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:44.395554    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:44.395554    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:44.395554    8968 round_trippers.go:580]     Audit-Id: a187e243-ade8-4eba-a622-4b87d1fae44d
	I0910 19:59:44.395554    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:44.395554    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:44.395554    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"1757","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0910 19:59:44.593751    8968 request.go:632] Waited for 197.3713ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:44.594209    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 19:59:44.594209    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:44.594209    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:44.594448    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:44.598375    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:44.598544    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:44.598544    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:44.598544    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:44 GMT
	I0910 19:59:44.598544    8968 round_trippers.go:580]     Audit-Id: 19b06a75-46d9-4534-ab3a-9e47736a3a73
	I0910 19:59:44.598544    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:44.598544    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:44.598544    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:44.598544    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 19:59:44.599304    8968 pod_ready.go:93] pod "kube-scheduler-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 19:59:44.599304    8968 pod_ready.go:82] duration metric: took 407.0378ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 19:59:44.599369    8968 pod_ready.go:39] duration metric: took 1.6087696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:59:44.599369    8968 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:59:44.608001    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:59:44.630296    8968 system_svc.go:56] duration metric: took 30.925ms WaitForService to wait for kubelet
	I0910 19:59:44.630296    8968 kubeadm.go:582] duration metric: took 18.4078396s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:59:44.630296    8968 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:59:44.797493    8968 request.go:632] Waited for 166.9512ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes
	I0910 19:59:44.797591    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes
	I0910 19:59:44.797591    8968 round_trippers.go:469] Request Headers:
	I0910 19:59:44.797681    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 19:59:44.797779    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 19:59:44.801072    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 19:59:44.801072    8968 round_trippers.go:577] Response Headers:
	I0910 19:59:44.801209    8968 round_trippers.go:580]     Audit-Id: b452cef8-1f82-4df9-8db5-e532bbc39302
	I0910 19:59:44.801209    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 19:59:44.801209    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 19:59:44.801209    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 19:59:44.801209    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 19:59:44.801209    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 19:59:45 GMT
	I0910 19:59:44.801754    8968 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1994"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15589 chars]
	I0910 19:59:44.803344    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:59:44.803427    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:59:44.803499    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:59:44.803499    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:59:44.803499    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:59:44.803499    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 19:59:44.803499    8968 node_conditions.go:105] duration metric: took 173.1915ms to run NodePressure ...
	I0910 19:59:44.803499    8968 start.go:241] waiting for startup goroutines ...
	I0910 19:59:44.803499    8968 start.go:255] writing updated cluster config ...
	I0910 19:59:44.807882    8968 out.go:201] 
	I0910 19:59:44.810694    8968 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:44.823639    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:59:44.823639    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:59:44.828937    8968 out.go:177] * Starting "multinode-629100-m03" worker node in "multinode-629100" cluster
	I0910 19:59:44.833677    8968 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 19:59:44.833677    8968 cache.go:56] Caching tarball of preloaded images
	I0910 19:59:44.834202    8968 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 19:59:44.834202    8968 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 19:59:44.834202    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 19:59:44.839736    8968 start.go:360] acquireMachinesLock for multinode-629100-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:59:44.839921    8968 start.go:364] duration metric: took 184.7µs to acquireMachinesLock for "multinode-629100-m03"
	I0910 19:59:44.839962    8968 start.go:96] Skipping create...Using existing machine configuration
	I0910 19:59:44.839962    8968 fix.go:54] fixHost starting: m03
	I0910 19:59:44.840595    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 19:59:46.691130    8968 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 19:59:46.691130    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:46.691506    8968 fix.go:112] recreateIfNeeded on multinode-629100-m03: state=Stopped err=<nil>
	W0910 19:59:46.691506    8968 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 19:59:46.693542    8968 out.go:177] * Restarting existing hyperv VM for "multinode-629100-m03" ...
	I0910 19:59:46.697859    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-629100-m03
	I0910 19:59:49.491873    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:59:49.492355    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:49.492355    8968 main.go:141] libmachine: Waiting for host to start...
	I0910 19:59:49.492355    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 19:59:51.472162    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:51.472421    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:51.472421    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:53.718573    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:59:53.718573    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:54.726797    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 19:59:56.621977    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:59:56.621977    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:56.621977    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 19:59:58.797657    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 19:59:58.798295    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:59:59.807897    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:01.780054    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:01.780054    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:01.780054    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:03.981888    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 20:00:03.981986    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:04.988356    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:06.950117    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:06.950117    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:06.950942    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:09.154880    8968 main.go:141] libmachine: [stdout =====>] : 
	I0910 20:00:09.154880    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:10.158703    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:12.145189    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:12.146199    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:12.146268    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:14.516827    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:14.516827    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:14.520303    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:16.425337    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:16.425337    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:16.426260    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:18.698335    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:18.698335    8968 main.go:141] libmachine: [stderr =====>] : 
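The host-start wait above repeats the same pair of PowerShell queries (VM state, then the first adapter's first address) roughly once a second until the address query returns a value. A minimal sketch of that retry shape in plain shell, with a stand-in query wired to succeed on the third poll:

```shell
# Sketch of the wait-for-IP loop; the counter stands in for the
# ((Get-VM ...).networkadapters[0]).ipaddresses[0] query, which returns
# an empty string until the guest has finished booting.
attempt=0
ip=""
while [ -z "$ip" ]; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 3 ]; then
    ip=172.31.214.220        # the address the real query eventually returned
  fi
  # the real loop sleeps about a second between polls; omitted here
done
echo "got IP after $attempt polls: $ip"
```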
	I0910 20:00:18.698831    8968 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100\config.json ...
	I0910 20:00:18.700918    8968 machine.go:93] provisionDockerMachine start ...
	I0910 20:00:18.701028    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:20.600667    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:20.600667    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:20.600771    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:22.848735    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:22.849124    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:22.853193    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:22.853290    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:22.853823    8968 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 20:00:22.998775    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 20:00:22.998775    8968 buildroot.go:166] provisioning hostname "multinode-629100-m03"
	I0910 20:00:22.998775    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:24.917705    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:24.917705    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:24.918774    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:27.190971    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:27.191015    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:27.195681    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:27.196511    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:27.196582    8968 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629100-m03 && echo "multinode-629100-m03" | sudo tee /etc/hostname
	I0910 20:00:27.366054    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629100-m03
	
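The hostname step above runs `sudo hostname <name> && echo <name> | sudo tee /etc/hostname` over SSH. The persistence half can be sketched against a scratch file (no sudo, no hostname(1) side effects; the temp file is a stand-in for /etc/hostname):

```shell
# Write the node name to a scratch stand-in for /etc/hostname.
etc_hostname=$(mktemp)
name=multinode-629100-m03
echo "$name" | tee "$etc_hostname" >/dev/null
persisted=$(cat "$etc_hostname")
echo "persisted hostname: $persisted"
```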
	I0910 20:00:27.366114    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:29.303820    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:29.303820    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:29.303915    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:31.621807    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:31.622852    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:31.626757    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:31.627123    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:31.627231    8968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
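The script above only touches /etc/hosts when the node name is missing: it rewrites an existing 127.0.1.1 line in place, and appends one otherwise. The same guard can be exercised against a scratch copy (POSIX `[[:space:]]` classes stand in for the GNU `\s` escape; GNU sed assumed for `-i`):

```shell
# Reproduce the guarded /etc/hosts rewrite on a scratch file.
hosts=$(mktemp)
name=multinode-629100-m03
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
if ! grep -q "[[:space:]]$name" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # a 127.0.1.1 entry already exists: rewrite it in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # no 127.0.1.1 entry: append one
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
grep '^127\.0\.1\.1' "$hosts"
```

Re-running the block is a no-op once the name is present, which is why the SSH command in the log can be issued on every restart.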
	I0910 20:00:31.773558    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 20:00:31.773558    8968 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0910 20:00:31.773558    8968 buildroot.go:174] setting up certificates
	I0910 20:00:31.773558    8968 provision.go:84] configureAuth start
	I0910 20:00:31.773558    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:33.622367    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:33.623009    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:33.623009    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:35.848539    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:35.848539    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:35.848834    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:37.736339    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:37.736339    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:37.736407    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:40.021628    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:40.022414    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:40.022414    8968 provision.go:143] copyHostCerts
	I0910 20:00:40.022584    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0910 20:00:40.023096    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0910 20:00:40.023096    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0910 20:00:40.023626    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0910 20:00:40.024911    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0910 20:00:40.025224    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0910 20:00:40.025224    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0910 20:00:40.025616    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0910 20:00:40.026931    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0910 20:00:40.027251    8968 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0910 20:00:40.027251    8968 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0910 20:00:40.027580    8968 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0910 20:00:40.028294    8968 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-629100-m03 san=[127.0.0.1 172.31.214.220 localhost minikube multinode-629100-m03]
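minikube generates this server certificate in-process (Go crypto/x509), not by shelling out; an equivalent self-signed certificate with the same style of SAN list can be produced with openssl as a sketch (requires OpenSSL 1.1.1+ for `-addext`; the output file names are illustrative):

```shell
# Self-signed server cert with IP and DNS SANs, roughly mirroring the
# san=[127.0.0.1 172.31.214.220 localhost minikube multinode-629100-m03]
# list from the log line above.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout server-key.pem -out server.pem \
  -subj "/O=jenkins.multinode-629100-m03" \
  -addext "subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-629100-m03"
# Inspect the SANs that ended up in the certificate.
openssl x509 -in server.pem -noout -ext subjectAltName
```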
	I0910 20:00:40.142265    8968 provision.go:177] copyRemoteCerts
	I0910 20:00:40.150325    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 20:00:40.150325    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:41.989235    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:41.989235    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:41.989326    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:44.196399    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:44.196399    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:44.196940    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:00:44.298062    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1474571s)
	I0910 20:00:44.298153    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0910 20:00:44.298512    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0910 20:00:44.348354    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0910 20:00:44.348615    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 20:00:44.391852    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0910 20:00:44.392218    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 20:00:44.440352    8968 provision.go:87] duration metric: took 12.6659406s to configureAuth
	I0910 20:00:44.440433    8968 buildroot.go:189] setting minikube options for container-runtime
	I0910 20:00:44.441419    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 20:00:44.441545    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:46.315267    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:46.315267    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:46.316319    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:48.571380    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:48.572268    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:48.575707    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:48.576294    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:48.576294    8968 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 20:00:48.702765    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 20:00:48.702765    8968 buildroot.go:70] root file system type: tmpfs
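The `tmpfs` answer above comes from a one-line probe of the guest's root filesystem; the same probe runs anywhere with GNU coreutils (`--output` is a GNU df extension; a buildroot guest reports tmpfs, an ordinary host will report ext4 or similar):

```shell
# Print only the filesystem type of /: df emits a header row plus one
# data row, and tail keeps the data row.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fstype: $fstype"
```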
	I0910 20:00:48.702765    8968 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 20:00:48.703549    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:50.532870    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:50.532870    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:50.532870    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:52.747996    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:52.748706    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:52.752549    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:52.752549    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:52.753106    8968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.215.172"
	Environment="NO_PROXY=172.31.215.172,172.31.210.34"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 20:00:52.908909    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.215.172
	Environment=NO_PROXY=172.31.215.172,172.31.210.34
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 20:00:52.908963    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:00:54.763022    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:00:54.763022    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:54.763331    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:00:56.979083    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:00:56.980033    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:00:56.983942    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:00:56.984101    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:00:56.984101    8968 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 20:00:59.242727    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
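The update command above is an idempotence guard: `diff` succeeds (and the whole `{ mv; daemon-reload; enable; restart; }` block is skipped) when the freshly written `.new` file matches the installed unit, so docker is only restarted when the unit actually changed. The file-swap half of the idiom, on scratch files with no systemctl:

```shell
# "Write .new, swap only on difference" on scratch files.
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service.new"
# No installed unit yet, so diff fails and the new file is moved into
# place (the "can't stat" case in the log, followed there by enable+restart).
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null \
  || mv "$dir/docker.service.new" "$dir/docker.service"
echo "installed: $(test -f "$dir/docker.service" && echo yes)"
```

On a second run with an unchanged unit, `diff` exits 0 and nothing after `||` executes.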
	I0910 20:00:59.243370    8968 machine.go:96] duration metric: took 40.5397199s to provisionDockerMachine
	I0910 20:00:59.243370    8968 start.go:293] postStartSetup for "multinode-629100-m03" (driver="hyperv")
	I0910 20:00:59.243370    8968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 20:00:59.254063    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 20:00:59.254063    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:01.100168    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:01.100168    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:01.100544    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:03.425307    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:03.425307    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:03.426112    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:01:03.538250    8968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2838973s)
	I0910 20:01:03.551308    8968 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 20:01:03.557944    8968 command_runner.go:130] > NAME=Buildroot
	I0910 20:01:03.557944    8968 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 20:01:03.557944    8968 command_runner.go:130] > ID=buildroot
	I0910 20:01:03.557944    8968 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 20:01:03.557944    8968 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 20:01:03.557944    8968 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 20:01:03.557944    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0910 20:01:03.558471    8968 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0910 20:01:03.559001    8968 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> 47242.pem in /etc/ssl/certs
	I0910 20:01:03.559119    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /etc/ssl/certs/47242.pem
	I0910 20:01:03.567667    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 20:01:03.586833    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /etc/ssl/certs/47242.pem (1708 bytes)
	I0910 20:01:03.632959    8968 start.go:296] duration metric: took 4.3892924s for postStartSetup
	I0910 20:01:03.632959    8968 fix.go:56] duration metric: took 1m18.787698s for fixHost
	I0910 20:01:03.632959    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:05.540885    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:05.541289    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:05.541289    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:07.834299    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:07.834299    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:07.838641    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:01:07.838641    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:01:07.838641    8968 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 20:01:07.983174    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725998468.203656123
	
	I0910 20:01:07.983265    8968 fix.go:216] guest clock: 1725998468.203656123
	I0910 20:01:07.983265    8968 fix.go:229] Guest: 2024-09-10 20:01:08.203656123 +0000 UTC Remote: 2024-09-10 20:01:03.6329591 +0000 UTC m=+372.332579001 (delta=4.570697023s)
	I0910 20:01:07.983415    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:09.891034    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:09.891034    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:09.891893    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:12.207946    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:12.207946    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:12.211930    8968 main.go:141] libmachine: Using SSH client type: native
	I0910 20:01:12.212602    8968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1064900] 0x10674e0 <nil>  [] 0s} 172.31.214.220 22 <nil> <nil>}
	I0910 20:01:12.212602    8968 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1725998467
	I0910 20:01:12.363123    8968 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Sep 10 20:01:07 UTC 2024
	
	I0910 20:01:12.363123    8968 fix.go:236] clock set: Tue Sep 10 20:01:07 UTC 2024
	 (err=<nil>)
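The guest-clock fix above (fix.go) reads `date +%s.%N` inside the VM, compares it with the host's recorded time, and resets the guest with `sudo date -s @<epoch>` when they drift. A minimal sketch of that decision, with illustrative names (the real code re-reads the host clock just before issuing the command, which is why the epoch in the log differs slightly from the recorded delta):

```python
# Sketch of minikube's guest-clock fix: compare guest vs host epoch seconds
# and emit the reset command only when the drift exceeds a threshold.
# Function name and threshold are assumptions for illustration.

def clock_fix_command(guest_epoch: float, host_epoch: float, threshold: float = 1.0):
    """Return the `sudo date -s @<epoch>` command if drift > threshold, else None."""
    delta = guest_epoch - host_epoch
    if abs(delta) <= threshold:
        return None  # clocks close enough; nothing to do
    # Truncate to whole seconds, matching the `@1725998467` form in the log.
    return "sudo date -s @%d" % int(host_epoch)

# Values adapted from the log: guest ~4.57s ahead of the host's recorded time.
cmd = clock_fix_command(1725998468.203656123, 1725998463.6329591)
```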
	I0910 20:01:12.363123    8968 start.go:83] releasing machines lock for "multinode-629100-m03", held for 1m27.5172728s
	I0910 20:01:12.363123    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:14.268870    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:14.269588    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:14.269588    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:16.565147    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:16.565147    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:16.567842    8968 out.go:177] * Found network options:
	I0910 20:01:16.570467    8968 out.go:177]   - NO_PROXY=172.31.215.172,172.31.210.34
	W0910 20:01:16.572933    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 20:01:16.572933    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 20:01:16.575423    8968 out.go:177]   - NO_PROXY=172.31.215.172,172.31.210.34
	W0910 20:01:16.577799    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 20:01:16.577847    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 20:01:16.578889    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 20:01:16.578966    8968 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 20:01:16.581113    8968 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0910 20:01:16.581167    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:16.588473    8968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 20:01:16.588473    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:18.537486    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:18.537650    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:18.537650    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:18.560126    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:18.560126    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:18.560488    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:20.893827    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:20.893827    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:20.893827    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:01:20.947432    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:20.947514    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:20.947835    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:01:20.988217    8968 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0910 20:01:20.989433    8968 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4079643s)
	W0910 20:01:20.989483    8968 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0910 20:01:21.039634    8968 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0910 20:01:21.040389    8968 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4516151s)
	W0910 20:01:21.040389    8968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 20:01:21.050898    8968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 20:01:21.080176    8968 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0910 20:01:21.080251    8968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
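The `find ... -name *bridge* -or -name *podman*` step above renames conflicting CNI configs to `*.mk_disabled` so they stop being loaded. A sketch of the same selection logic in Python (function name is illustrative, not minikube's):

```python
import fnmatch

def cni_configs_to_disable(filenames):
    """Mirror the find expression: names containing 'bridge' or 'podman'
    that have not already been renamed to *.mk_disabled."""
    selected = []
    for name in filenames:
        if name.endswith(".mk_disabled"):
            continue  # already disabled on a previous run
        if fnmatch.fnmatch(name, "*bridge*") or fnmatch.fnmatch(name, "*podman*"):
            selected.append(name)
    return selected
```

On this node only `87-podman-bridge.conflist` matches, which is exactly what the log reports as disabled.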
	I0910 20:01:21.080251    8968 start.go:495] detecting cgroup driver to use...
	I0910 20:01:21.080433    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0910 20:01:21.112127    8968 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0910 20:01:21.112195    8968 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0910 20:01:21.120440    8968 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0910 20:01:21.128244    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 20:01:21.160340    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 20:01:21.179447    8968 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 20:01:21.190340    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 20:01:21.220345    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 20:01:21.251859    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 20:01:21.279243    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 20:01:21.308497    8968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 20:01:21.348781    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 20:01:21.379030    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 20:01:21.414108    8968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
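The run of `sed -i` commands above rewrites containerd's config.toml in place to use the "cgroupfs" driver. Two of those edits can be sketched as string transforms (a simplified model, not the full sed series):

```python
import re

def set_cgroupfs(config_toml: str) -> str:
    """Sketch of two of the sed edits above: force SystemdCgroup = false
    (preserving indentation) and migrate the legacy v1 runtime name."""
    config_toml = re.sub(r"(?m)^(\s*)SystemdCgroup = .*$",
                         r"\1SystemdCgroup = false", config_toml)
    config_toml = config_toml.replace('"io.containerd.runtime.v1.linux"',
                                      '"io.containerd.runc.v2"')
    return config_toml
```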
	I0910 20:01:21.444908    8968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 20:01:21.463534    8968 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 20:01:21.474116    8968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 20:01:21.504191    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:21.710944    8968 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 20:01:21.741156    8968 start.go:495] detecting cgroup driver to use...
	I0910 20:01:21.749419    8968 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 20:01:21.772677    8968 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0910 20:01:21.772677    8968 command_runner.go:130] > [Unit]
	I0910 20:01:21.772677    8968 command_runner.go:130] > Description=Docker Application Container Engine
	I0910 20:01:21.772677    8968 command_runner.go:130] > Documentation=https://docs.docker.com
	I0910 20:01:21.773219    8968 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0910 20:01:21.773219    8968 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0910 20:01:21.773219    8968 command_runner.go:130] > StartLimitBurst=3
	I0910 20:01:21.773219    8968 command_runner.go:130] > StartLimitIntervalSec=60
	I0910 20:01:21.773219    8968 command_runner.go:130] > [Service]
	I0910 20:01:21.773308    8968 command_runner.go:130] > Type=notify
	I0910 20:01:21.773339    8968 command_runner.go:130] > Restart=on-failure
	I0910 20:01:21.773339    8968 command_runner.go:130] > Environment=NO_PROXY=172.31.215.172
	I0910 20:01:21.773339    8968 command_runner.go:130] > Environment=NO_PROXY=172.31.215.172,172.31.210.34
	I0910 20:01:21.773339    8968 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0910 20:01:21.773429    8968 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0910 20:01:21.773429    8968 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0910 20:01:21.773429    8968 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0910 20:01:21.773429    8968 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0910 20:01:21.773517    8968 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0910 20:01:21.773578    8968 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0910 20:01:21.773578    8968 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0910 20:01:21.773578    8968 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0910 20:01:21.773578    8968 command_runner.go:130] > ExecStart=
	I0910 20:01:21.773578    8968 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0910 20:01:21.773696    8968 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0910 20:01:21.773696    8968 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0910 20:01:21.773696    8968 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0910 20:01:21.773737    8968 command_runner.go:130] > LimitNOFILE=infinity
	I0910 20:01:21.773737    8968 command_runner.go:130] > LimitNPROC=infinity
	I0910 20:01:21.773737    8968 command_runner.go:130] > LimitCORE=infinity
	I0910 20:01:21.773737    8968 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0910 20:01:21.773737    8968 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0910 20:01:21.773737    8968 command_runner.go:130] > TasksMax=infinity
	I0910 20:01:21.773737    8968 command_runner.go:130] > TimeoutStartSec=0
	I0910 20:01:21.773835    8968 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0910 20:01:21.773835    8968 command_runner.go:130] > Delegate=yes
	I0910 20:01:21.773835    8968 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0910 20:01:21.773835    8968 command_runner.go:130] > KillMode=process
	I0910 20:01:21.773915    8968 command_runner.go:130] > [Install]
	I0910 20:01:21.773915    8968 command_runner.go:130] > WantedBy=multi-user.target
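The comments inside the unit dump above describe the standard systemd drop-in pattern: because drop-ins append `ExecStart=` lines, an empty `ExecStart=` must come first to clear the inherited command, otherwise systemd rejects multiple start commands for a non-oneshot service. A tiny sketch of generating such a drop-in (helper name is illustrative):

```python
def dropin_execstart(cmd: str) -> str:
    """Build a [Service] drop-in that first clears the inherited ExecStart=
    and then sets exactly one replacement command."""
    return "[Service]\nExecStart=\nExecStart=%s\n" % cmd
```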
	I0910 20:01:21.783017    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 20:01:21.814834    8968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 20:01:21.861533    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 20:01:21.895115    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 20:01:21.929737    8968 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 20:01:21.986989    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 20:01:22.009786    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 20:01:22.044338    8968 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0910 20:01:22.052447    8968 ssh_runner.go:195] Run: which cri-dockerd
	I0910 20:01:22.060048    8968 command_runner.go:130] > /usr/bin/cri-dockerd
	I0910 20:01:22.068564    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 20:01:22.087692    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 20:01:22.130748    8968 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 20:01:22.326041    8968 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 20:01:22.510450    8968 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 20:01:22.510682    8968 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 20:01:22.552534    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:22.733732    8968 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 20:01:25.383537    8968 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.649626s)
	I0910 20:01:25.394801    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 20:01:25.428389    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 20:01:25.461351    8968 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 20:01:25.658622    8968 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 20:01:25.853479    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:26.039464    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 20:01:26.080353    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 20:01:26.111225    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:26.301028    8968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 20:01:26.405692    8968 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 20:01:26.416235    8968 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 20:01:26.425182    8968 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0910 20:01:26.425258    8968 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 20:01:26.425258    8968 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0910 20:01:26.425258    8968 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0910 20:01:26.425258    8968 command_runner.go:130] > Access: 2024-09-10 20:01:26.551452255 +0000
	I0910 20:01:26.425306    8968 command_runner.go:130] > Modify: 2024-09-10 20:01:26.551452255 +0000
	I0910 20:01:26.425306    8968 command_runner.go:130] > Change: 2024-09-10 20:01:26.555452503 +0000
	I0910 20:01:26.425306    8968 command_runner.go:130] >  Birth: -
	I0910 20:01:26.425373    8968 start.go:563] Will wait 60s for crictl version
	I0910 20:01:26.434411    8968 ssh_runner.go:195] Run: which crictl
	I0910 20:01:26.440547    8968 command_runner.go:130] > /usr/bin/crictl
	I0910 20:01:26.448761    8968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 20:01:26.506672    8968 command_runner.go:130] > Version:  0.1.0
	I0910 20:01:26.506672    8968 command_runner.go:130] > RuntimeName:  docker
	I0910 20:01:26.506672    8968 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0910 20:01:26.506672    8968 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 20:01:26.506672    8968 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 20:01:26.514848    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 20:01:26.542515    8968 command_runner.go:130] > 27.2.0
	I0910 20:01:26.554585    8968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 20:01:26.583005    8968 command_runner.go:130] > 27.2.0
	I0910 20:01:26.588672    8968 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 20:01:26.590691    8968 out.go:177]   - env NO_PROXY=172.31.215.172
	I0910 20:01:26.592929    8968 out.go:177]   - env NO_PROXY=172.31.215.172,172.31.210.34
	I0910 20:01:26.595533    8968 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0910 20:01:26.599058    8968 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0910 20:01:26.599058    8968 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0910 20:01:26.599058    8968 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0910 20:01:26.599058    8968 ip.go:211] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:36:e6 Flags:up|broadcast|multicast|running}
	I0910 20:01:26.601710    8968 ip.go:214] interface addr: fe80::bc85:cd58:cadb:2533/64
	I0910 20:01:26.601710    8968 ip.go:214] interface addr: 172.31.208.1/20
	I0910 20:01:26.611962    8968 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0910 20:01:26.618162    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
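The bash pipeline above updates /etc/hosts idempotently: `grep -v` drops any existing `host.minikube.internal` line, then the new mapping is appended and the temp file copied back. The same transform as a sketch (function name is illustrative):

```python
def update_hosts(hosts_text: str, ip: str, name: str = "host.minikube.internal") -> str:
    """Drop any line already ending in "\t<name>", then append the new
    mapping - the grep -v + echo pipeline minikube runs over /etc/hosts."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append("%s\t%s" % (ip, name))
    return "\n".join(kept) + "\n"
```

Running it twice with the same IP leaves the file unchanged, which is why minikube can repeat this step on every start.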
	I0910 20:01:26.638790    8968 mustload.go:65] Loading cluster: multinode-629100
	I0910 20:01:26.639513    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 20:01:26.639951    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 20:01:28.542619    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:28.542619    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:28.542619    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 20:01:28.544104    8968 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-629100 for IP: 172.31.214.220
	I0910 20:01:28.544104    8968 certs.go:194] generating shared ca certs ...
	I0910 20:01:28.544189    8968 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 20:01:28.544684    8968 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0910 20:01:28.544962    8968 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0910 20:01:28.545125    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 20:01:28.545366    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0910 20:01:28.545525    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 20:01:28.545664    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 20:01:28.546109    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem (1338 bytes)
	W0910 20:01:28.546373    8968 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724_empty.pem, impossibly tiny 0 bytes
	I0910 20:01:28.546556    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0910 20:01:28.546736    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0910 20:01:28.546958    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0910 20:01:28.546958    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0910 20:01:28.547594    8968 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem (1708 bytes)
	I0910 20:01:28.547874    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem -> /usr/share/ca-certificates/47242.pem
	I0910 20:01:28.548018    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.548147    8968 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem -> /usr/share/ca-certificates/4724.pem
	I0910 20:01:28.548374    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 20:01:28.595999    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 20:01:28.641867    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 20:01:28.692971    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 20:01:28.747797    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\47242.pem --> /usr/share/ca-certificates/47242.pem (1708 bytes)
	I0910 20:01:28.791833    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 20:01:28.837087    8968 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4724.pem --> /usr/share/ca-certificates/4724.pem (1338 bytes)
	I0910 20:01:28.903078    8968 ssh_runner.go:195] Run: openssl version
	I0910 20:01:28.911038    8968 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 20:01:28.919798    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 20:01:28.950357    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.958007    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.958007    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.966909    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 20:01:28.975038    8968 command_runner.go:130] > b5213941
	I0910 20:01:28.983186    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 20:01:29.011274    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4724.pem && ln -fs /usr/share/ca-certificates/4724.pem /etc/ssl/certs/4724.pem"
	I0910 20:01:29.038463    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4724.pem
	I0910 20:01:29.045528    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 20:01:29.046156    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:58 /usr/share/ca-certificates/4724.pem
	I0910 20:01:29.056089    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4724.pem
	I0910 20:01:29.071010    8968 command_runner.go:130] > 51391683
	I0910 20:01:29.079135    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4724.pem /etc/ssl/certs/51391683.0"
	I0910 20:01:29.106730    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47242.pem && ln -fs /usr/share/ca-certificates/47242.pem /etc/ssl/certs/47242.pem"
	I0910 20:01:29.134716    8968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47242.pem
	I0910 20:01:29.141563    8968 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 20:01:29.141563    8968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:58 /usr/share/ca-certificates/47242.pem
	I0910 20:01:29.150035    8968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47242.pem
	I0910 20:01:29.157622    8968 command_runner.go:130] > 3ec20f2e
	I0910 20:01:29.168696    8968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47242.pem /etc/ssl/certs/3ec20f2e.0"
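The cert steps above follow OpenSSL's hashed-lookup convention: each CA in /etc/ssl/certs needs a `<subject-hash>.0` symlink (the hash comes from `openssl x509 -hash -noout`), created idempotently via `test -L ... || ln -fs ...`. A sketch of how those commands are assembled, using the hash/path pairs from the log (helper name is an assumption):

```python
def link_commands(pairs, certs_dir="/etc/ssl/certs"):
    """Build `test -L <link> || ln -fs <target> <link>` commands: create the
    <subject-hash>.0 symlink only when it does not already exist as a link."""
    cmds = []
    for target, subject_hash in pairs:
        link = "%s/%s.0" % (certs_dir, subject_hash)
        cmds.append("test -L %s || ln -fs %s %s" % (link, target, link))
    return cmds
```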
	I0910 20:01:29.194767    8968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 20:01:29.201113    8968 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 20:01:29.201602    8968 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 20:01:29.201832    8968 kubeadm.go:934] updating node {m03 172.31.214.220 0 v1.31.0  false true} ...
	I0910 20:01:29.201832    8968 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-629100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.214.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
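The kubelet unit generated above (kubeadm.go:946) is parameterized per node: binary path by Kubernetes version, plus `--hostname-override` and `--node-ip` for this machine. A sketch of assembling that ExecStart line, mirroring the flags visible in the log (function name is illustrative):

```python
def kubelet_exec_start(version: str, node_name: str, node_ip: str) -> str:
    """Assemble the per-node kubelet ExecStart seen in the generated unit;
    the flag set mirrors the log, the helper itself is an assumption."""
    return ("/var/lib/minikube/binaries/%s/kubelet "
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "
            "--config=/var/lib/kubelet/config.yaml "
            "--hostname-override=%s "
            "--kubeconfig=/etc/kubernetes/kubelet.conf "
            "--node-ip=%s") % (version, node_name, node_ip)
```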
	I0910 20:01:29.210168    8968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 20:01:29.228517    8968 command_runner.go:130] > kubeadm
	I0910 20:01:29.228713    8968 command_runner.go:130] > kubectl
	I0910 20:01:29.228713    8968 command_runner.go:130] > kubelet
	I0910 20:01:29.228713    8968 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 20:01:29.238279    8968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0910 20:01:29.255440    8968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0910 20:01:29.285564    8968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 20:01:29.324648    8968 ssh_runner.go:195] Run: grep 172.31.215.172	control-plane.minikube.internal$ /etc/hosts
	I0910 20:01:29.330462    8968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.215.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
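[editor's note] The bash one-liner above keeps the control-plane hosts entry idempotent: strip any stale `control-plane.minikube.internal` line, then append the current one. A minimal sketch of the same pattern in Python, run against a temp file rather than `/etc/hosts` (the IP and hostname are taken from this run; `pin_host` is an illustrative helper name):

```python
import pathlib
import tempfile

def pin_host(hosts_path, ip, name="control-plane.minikube.internal"):
    p = pathlib.Path(hosts_path)
    # Drop every existing line for `name`, keep the rest untouched.
    kept = [l for l in p.read_text().splitlines()
            if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")          # append the fresh entry
    p.write_text("\n".join(kept) + "\n")

# A throwaway stand-in for /etc/hosts, seeded with a stale entry.
with tempfile.NamedTemporaryFile("w", suffix=".hosts", delete=False) as f:
    f.write("127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n")

pin_host(f.name, "172.31.215.172")
content = pathlib.Path(f.name).read_text()
print(content)
```

Writing through a temp copy and `sudo cp`, as the logged command does, additionally avoids truncating `/etc/hosts` on a failed write.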
	I0910 20:01:29.360834    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:29.564245    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 20:01:29.590607    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 20:01:29.590926    8968 start.go:317] joinCluster: &{Name:multinode-629100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-629100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.215.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.210.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 20:01:29.590926    8968 start.go:330] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0910 20:01:29.590926    8968 host.go:66] Checking if "multinode-629100-m03" exists ...
	I0910 20:01:29.591848    8968 mustload.go:65] Loading cluster: multinode-629100
	I0910 20:01:29.591848    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 20:01:29.592684    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 20:01:31.535953    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:31.535953    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:31.536167    8968 host.go:66] Checking if "multinode-629100" exists ...
	I0910 20:01:31.536976    8968 api_server.go:166] Checking apiserver status ...
	I0910 20:01:31.547683    8968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 20:01:31.547683    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 20:01:33.447093    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:33.447093    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:33.447093    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:35.683777    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 20:01:35.683777    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:35.684054    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 20:01:35.798404    8968 command_runner.go:130] > 1954
	I0910 20:01:35.798457    8968 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.2504859s)
	I0910 20:01:35.807534    8968 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1954/cgroup
	W0910 20:01:35.824175    8968 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1954/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 20:01:35.833416    8968 ssh_runner.go:195] Run: ls
	I0910 20:01:35.839414    8968 api_server.go:253] Checking apiserver healthz at https://172.31.215.172:8443/healthz ...
	I0910 20:01:35.846929    8968 api_server.go:279] https://172.31.215.172:8443/healthz returned 200:
	ok
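[editor's note] The health check above is a plain GET against `/healthz` expecting HTTP 200 with body `ok`. A sketch of the same probe, exercised here against a throwaway local HTTP server instead of a real apiserver (the real check goes over TLS with client certificates, omitted for brevity):

```python
import http.server
import threading
import urllib.request

class Healthz(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer /healthz the way a healthy apiserver does: 200 + "ok".
        body = b"ok" if self.path == "/healthz" else b"not found"
        self.send_response(200 if self.path == "/healthz" else 404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

srv = http.server.HTTPServer(("127.0.0.1", 0), Healthz)
threading.Thread(target=srv.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{srv.server_port}/healthz") as resp:
    status, body = resp.status, resp.read().decode()
print(status, body)
srv.shutdown()
```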
	I0910 20:01:35.855169    8968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain multinode-629100-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0910 20:01:36.012788    8968 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-6tdpv, kube-system/kube-proxy-4tzx6
	I0910 20:01:36.015537    8968 command_runner.go:130] > node/multinode-629100-m03 cordoned
	I0910 20:01:36.015537    8968 command_runner.go:130] > node/multinode-629100-m03 drained
	I0910 20:01:36.015537    8968 node.go:128] successfully drained node "multinode-629100-m03"
	I0910 20:01:36.015537    8968 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0910 20:01:36.015537    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 20:01:37.881650    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:37.881650    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:37.882660    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:40.126469    8968 main.go:141] libmachine: [stdout =====>] : 172.31.214.220
	
	I0910 20:01:40.127529    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:40.127529    8968 sshutil.go:53] new ssh client: &{IP:172.31.214.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m03\id_rsa Username:docker}
	I0910 20:01:40.525398    8968 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 20:01:40.525658    8968 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0910 20:01:40.526648    8968 command_runner.go:130] > [reset] Stopping the kubelet service
	I0910 20:01:40.541887    8968 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0910 20:01:40.700008    8968 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0910 20:01:40.714826    8968 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0910 20:01:40.714826    8968 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0910 20:01:40.714826    8968 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0910 20:01:40.715804    8968 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0910 20:01:40.715804    8968 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0910 20:01:40.715804    8968 command_runner.go:130] > to reset your system's IPVS tables.
	I0910 20:01:40.715804    8968 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0910 20:01:40.715804    8968 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0910 20:01:40.718024    8968 command_runner.go:130] ! W0910 20:01:40.751622    1580 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0910 20:01:40.718463    8968 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.702608s)
	I0910 20:01:40.718463    8968 node.go:155] successfully reset node "multinode-629100-m03"
	I0910 20:01:40.719519    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 20:01:40.720091    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 20:01:40.720682    8968 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0910 20:01:40.720682    8968 round_trippers.go:463] DELETE https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:40.720682    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:40.720682    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:40.720682    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:40.720682    8968 round_trippers.go:473]     Content-Type: application/json
	I0910 20:01:40.744773    8968 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0910 20:01:40.745228    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:40.745228    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:40.745228    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Content-Length: 171
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:40 GMT
	I0910 20:01:40.745228    8968 round_trippers.go:580]     Audit-Id: 0cce042d-7aa0-4e7b-8f8a-3110f38e7066
	I0910 20:01:40.745339    8968 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-629100-m03","kind":"nodes","uid":"39f4665c-e276-4967-a647-46b3a9a1c67a"}}
	I0910 20:01:40.745444    8968 node.go:180] successfully deleted node "multinode-629100-m03"
	I0910 20:01:40.745444    8968 start.go:334] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
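[editor's note] The sequence just completed is minikube's remove-before-rejoin flow for a stale worker: `kubectl drain`, then `kubeadm reset` on the node, then a DELETE of the Node object. A sketch of that orchestration with the command runner injected, so the ordering can be exercised without a cluster (`remove_worker` is an illustrative helper; flags mirror the logged commands):

```python
def remove_worker(node, run, cri_socket="unix:///var/run/cri-dockerd.sock"):
    # 1) Drain workloads off the node (DaemonSet pods are ignored).
    run(["kubectl", "drain", node, "--force", "--grace-period=1",
         "--ignore-daemonsets", "--delete-emptydir-data", "--disable-eviction"])
    # 2) Wipe kubeadm state on the node itself.
    run(["kubeadm", "reset", "--force", "--ignore-preflight-errors=all",
         f"--cri-socket={cri_socket}"])
    # 3) Remove the Node object from the API server.
    run(["kubectl", "delete", "node", node])

issued = []
remove_worker("multinode-629100-m03", issued.append)
for cmd in issued:
    print(" ".join(cmd))
```

Draining before resetting matters: it evicts pods while the kubelet can still honor terminations, so the later `kubeadm join` starts from a clean node.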
	I0910 20:01:40.745522    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 20:01:40.745522    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 20:01:42.576860    8968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 20:01:42.576860    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:42.576935    8968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 20:01:44.805167    8968 main.go:141] libmachine: [stdout =====>] : 172.31.215.172
	
	I0910 20:01:44.805756    8968 main.go:141] libmachine: [stderr =====>] : 
	I0910 20:01:44.805756    8968 sshutil.go:53] new ssh client: &{IP:172.31.215.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 20:01:44.981025    8968 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ojo1z0.33u67m7cn1zzquvz --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b 
	I0910 20:01:44.981166    8968 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2352974s)
	I0910 20:01:44.981201    8968 start.go:343] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0910 20:01:44.981277    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ojo1z0.33u67m7cn1zzquvz --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m03"
	I0910 20:01:45.147342    8968 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 20:01:46.490838    8968 command_runner.go:130] > [preflight] Running pre-flight checks
	I0910 20:01:46.490924    8968 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0910 20:01:46.490993    8968 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0910 20:01:46.491078    8968 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 20:01:46.491194    8968 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 20:01:46.491194    8968 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0910 20:01:46.491194    8968 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 20:01:46.491281    8968 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002263974s
	I0910 20:01:46.491308    8968 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0910 20:01:46.491372    8968 command_runner.go:130] > This node has joined the cluster:
	I0910 20:01:46.491431    8968 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0910 20:01:46.491431    8968 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0910 20:01:46.491532    8968 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0910 20:01:46.491532    8968 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ojo1z0.33u67m7cn1zzquvz --discovery-token-ca-cert-hash sha256:86983a252884c397496920203b77157bed381265c765ca295414c291ad070a4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-629100-m03": (1.5101533s)
	I0910 20:01:46.491532    8968 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 20:01:46.688776    8968 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0910 20:01:46.882906    8968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-629100-m03 minikube.k8s.io/updated_at=2024_09_10T20_01_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=multinode-629100 minikube.k8s.io/primary=false
	I0910 20:01:47.010780    8968 command_runner.go:130] > node/multinode-629100-m03 labeled
	I0910 20:01:47.011183    8968 start.go:319] duration metric: took 17.4190796s to joinCluster
	I0910 20:01:47.011334    8968 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.31.214.220 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0910 20:01:47.011471    8968 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 20:01:47.014855    8968 out.go:177] * Verifying Kubernetes components...
	I0910 20:01:47.026004    8968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 20:01:47.208180    8968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 20:01:47.232814    8968 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 20:01:47.233302    8968 kapi.go:59] client config for multinode-629100: &rest.Config{Host:"https://172.31.215.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-629100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2730080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 20:01:47.233975    8968 node_ready.go:35] waiting up to 6m0s for node "multinode-629100-m03" to be "Ready" ...
	I0910 20:01:47.233975    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:47.233975    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:47.233975    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:47.233975    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:47.238356    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:47.238424    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:47.238424    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:47.238424    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:47 GMT
	I0910 20:01:47.238424    8968 round_trippers.go:580]     Audit-Id: 018d3cc2-37ef-4ba0-80da-dc5fc6889d89
	I0910 20:01:47.238424    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:47.238424    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:47.238424    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:47.238680    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:47.748289    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:47.748526    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:47.748526    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:47.748526    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:47.751896    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:47.751896    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:47.751896    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:47.751896    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:47 GMT
	I0910 20:01:47.751896    8968 round_trippers.go:580]     Audit-Id: e386fa60-5fe5-4216-8c5a-fe5e5d3c9cbf
	I0910 20:01:47.751896    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:47.751896    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:47.751896    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:47.752039    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:48.248683    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:48.248683    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:48.248683    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:48.248683    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:48.261494    8968 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0910 20:01:48.261666    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:48.261666    8968 round_trippers.go:580]     Audit-Id: 3b4e6c02-4a3a-4e99-a8db-2554402d0034
	I0910 20:01:48.261666    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:48.261666    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:48.261666    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:48.261666    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:48.261756    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:48 GMT
	I0910 20:01:48.261905    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:48.738987    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:48.739076    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:48.739076    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:48.739076    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:48.741534    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:48.741534    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:48.741534    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:48.741534    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:48.741534    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:48 GMT
	I0910 20:01:48.741534    8968 round_trippers.go:580]     Audit-Id: 7d9d8f1a-35aa-4d4c-a71b-63d80b3ecda6
	I0910 20:01:48.741534    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:48.741534    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:48.742197    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:49.250080    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:49.250157    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:49.250157    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:49.250157    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:49.253352    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:49.253352    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:49.253352    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:49.253728    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:49 GMT
	I0910 20:01:49.253728    8968 round_trippers.go:580]     Audit-Id: 33840c83-4c48-4610-ad8d-14a280266c5b
	I0910 20:01:49.253728    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:49.253728    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:49.253728    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:49.253959    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:49.254532    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:49.742775    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:49.742775    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:49.742775    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:49.742775    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:49.745778    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:49.745778    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:49.745778    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:49.745778    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:49.745778    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:49.745778    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:49.746092    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:49 GMT
	I0910 20:01:49.746092    8968 round_trippers.go:580]     Audit-Id: efd89449-f9df-4685-acf0-7a4f959f5155
	I0910 20:01:49.746187    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2143","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3391 chars]
	I0910 20:01:50.248140    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:50.248169    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:50.248211    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:50.248240    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:50.250593    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:50.250593    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:50.250593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:50.251283    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:50.251283    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:50 GMT
	I0910 20:01:50.251283    8968 round_trippers.go:580]     Audit-Id: 1ff81fab-0d7d-481a-9f12-516979946b90
	I0910 20:01:50.251283    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:50.251283    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:50.251622    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:50.735945    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:50.735945    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:50.735945    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:50.735945    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:50.739471    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:50.739471    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:50.739471    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:50.739574    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:50.739596    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:50.739596    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:50 GMT
	I0910 20:01:50.739596    8968 round_trippers.go:580]     Audit-Id: 08280fbf-3cc0-419f-8a79-b38e1591672a
	I0910 20:01:50.739596    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:50.739848    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:51.237652    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:51.237652    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:51.237652    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:51.237652    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:51.244056    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 20:01:51.244056    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:51.244056    8968 round_trippers.go:580]     Audit-Id: 78353392-a695-4315-aa33-47f32baa3eea
	I0910 20:01:51.244056    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:51.244056    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:51.244056    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:51.244056    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:51.244056    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:51 GMT
	I0910 20:01:51.244056    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:51.735812    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:51.735812    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:51.735812    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:51.735812    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:51.740859    8968 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 20:01:51.741414    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:51.741414    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:51.741414    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:51.741414    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:51.741414    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:51 GMT
	I0910 20:01:51.741476    8968 round_trippers.go:580]     Audit-Id: 648c12d1-68ed-4dc6-857e-50e068702256
	I0910 20:01:51.741476    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:51.741521    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:51.741521    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:52.249622    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:52.249622    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:52.249622    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:52.249622    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:52.252861    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:52.252861    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:52.253361    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:52.253361    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:52.253361    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:52 GMT
	I0910 20:01:52.253361    8968 round_trippers.go:580]     Audit-Id: f8b7d6fa-7fd0-4c99-b793-64f6a89c69d9
	I0910 20:01:52.253361    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:52.253361    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:52.253529    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:52.748805    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:52.748917    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:52.748917    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:52.748917    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:52.752436    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:52.752436    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:52.752849    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:52.752849    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:52.752849    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:52.752849    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:52 GMT
	I0910 20:01:52.752849    8968 round_trippers.go:580]     Audit-Id: 3b9ac0a0-3d08-4b21-9d4d-6db3c581c780
	I0910 20:01:52.752849    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:52.753148    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:53.236502    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:53.236502    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:53.236502    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:53.236502    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:53.240071    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:53.240071    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:53.240071    8968 round_trippers.go:580]     Audit-Id: c70fc9c1-c9e2-4bba-93bd-adc759f71448
	I0910 20:01:53.240071    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:53.240071    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:53.240071    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:53.240071    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:53.240071    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:53 GMT
	I0910 20:01:53.240601    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:53.736048    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:53.736251    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:53.736251    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:53.736251    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:53.740105    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:53.740165    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:53.740165    8968 round_trippers.go:580]     Audit-Id: 495730d4-cb25-4d83-8ae7-3f9c567e0249
	I0910 20:01:53.740165    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:53.740165    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:53.740165    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:53.740165    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:53.740165    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:53 GMT
	I0910 20:01:53.740259    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:54.237440    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:54.237440    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:54.237440    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:54.237440    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:54.240804    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:54.240804    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:54.240804    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:54.240804    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:54.241423    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:54.241423    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:54 GMT
	I0910 20:01:54.241423    8968 round_trippers.go:580]     Audit-Id: c149fceb-b484-4349-ac0f-161f9f3f1c39
	I0910 20:01:54.241423    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:54.241682    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:54.241682    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:54.738048    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:54.738048    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:54.738134    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:54.738134    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:54.741525    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:54.742159    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:54.742159    8968 round_trippers.go:580]     Audit-Id: b27bed68-b101-4074-ba87-98f124603c28
	I0910 20:01:54.742159    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:54.742159    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:54.742159    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:54.742159    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:54.742270    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:54 GMT
	I0910 20:01:54.742333    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:55.235921    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:55.236129    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:55.236129    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:55.236129    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:55.238809    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:55.238809    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:55.238809    8968 round_trippers.go:580]     Audit-Id: f2e1709d-1df3-49a1-bdb5-ea1919869c04
	I0910 20:01:55.239779    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:55.239779    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:55.239779    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:55.239779    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:55.239779    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:55 GMT
	I0910 20:01:55.239813    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:55.737965    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:55.738035    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:55.738035    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:55.738035    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:55.742290    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 20:01:55.742469    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:55.742469    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:55 GMT
	I0910 20:01:55.742469    8968 round_trippers.go:580]     Audit-Id: 2efdaae7-ac52-4f4f-95fe-379b4b211af9
	I0910 20:01:55.742469    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:55.742469    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:55.742469    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:55.742554    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:55.742712    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:56.242501    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:56.242501    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:56.242501    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:56.242501    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:56.246924    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:56.247006    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:56.247006    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:56.247006    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:56 GMT
	I0910 20:01:56.247006    8968 round_trippers.go:580]     Audit-Id: f9da7e8c-bb9c-45de-9845-aebd2a06ba58
	I0910 20:01:56.247093    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:56.247093    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:56.247130    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:56.247158    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2159","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3500 chars]
	I0910 20:01:56.247847    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:56.742861    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:56.742940    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:56.742940    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:56.742940    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:56.746314    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:56.746314    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:56.746504    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:56 GMT
	I0910 20:01:56.746504    8968 round_trippers.go:580]     Audit-Id: 52b20db3-6927-4475-8b98-ef6f1cf2c16d
	I0910 20:01:56.746504    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:56.746504    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:56.746504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:56.746504    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:56.746816    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:57.243079    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:57.243186    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:57.243186    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:57.243186    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:57.246611    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:57.246611    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:57.246611    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:57 GMT
	I0910 20:01:57.246611    8968 round_trippers.go:580]     Audit-Id: 16c25b59-8f64-4843-b584-7513c881f1fb
	I0910 20:01:57.247301    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:57.247301    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:57.247301    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:57.247301    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:57.247473    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:57.742934    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:57.743012    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:57.743012    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:57.743012    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:57.745895    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:57.745895    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:57.745895    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:57.745998    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:57.745998    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:57 GMT
	I0910 20:01:57.745998    8968 round_trippers.go:580]     Audit-Id: 3ab5b564-4a02-4089-bc02-91d4d2178d27
	I0910 20:01:57.745998    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:57.745998    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:57.746195    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:58.240249    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:58.240249    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:58.240249    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:58.240249    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:58.243615    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:58.243615    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:58.243615    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:58.243615    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:58.243615    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:58.243615    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:58 GMT
	I0910 20:01:58.243615    8968 round_trippers.go:580]     Audit-Id: a7ad6bd5-1feb-4dca-b78c-61625a050af5
	I0910 20:01:58.243615    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:58.244542    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:58.740841    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:58.741249    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:58.741249    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:58.741249    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:58.744698    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:01:58.744698    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:58.744698    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:58.744698    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:58 GMT
	I0910 20:01:58.744698    8968 round_trippers.go:580]     Audit-Id: 155da96f-b9e6-4378-ba75-09855b57b3a9
	I0910 20:01:58.744698    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:58.744698    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:58.744698    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:58.745278    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:58.745596    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:01:59.244731    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:59.244731    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:59.244731    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:59.244731    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:59.247200    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:59.247200    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:59.247200    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:59.247200    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:59.247200    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:59.247200    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:59 GMT
	I0910 20:01:59.247200    8968 round_trippers.go:580]     Audit-Id: 6ce737b8-278d-4e2c-880f-82e136275b73
	I0910 20:01:59.247200    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:59.248287    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:01:59.745064    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:01:59.745140    8968 round_trippers.go:469] Request Headers:
	I0910 20:01:59.745140    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:01:59.745216    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:01:59.747349    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:01:59.747349    8968 round_trippers.go:577] Response Headers:
	I0910 20:01:59.747349    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:01:59.747349    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:01:59.748273    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:01:59 GMT
	I0910 20:01:59.748273    8968 round_trippers.go:580]     Audit-Id: c3dc3c0b-1018-47c5-9775-2b3daf772078
	I0910 20:01:59.748273    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:01:59.748273    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:01:59.748394    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:02:00.241361    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:00.241361    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:00.241361    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:00.241361    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:00.247392    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 20:02:00.247392    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:00.247392    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:00.247392    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:00.247392    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:00.247588    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:00.247588    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:00 GMT
	I0910 20:02:00.247588    8968 round_trippers.go:580]     Audit-Id: 69ff8858-1b70-49d5-b25f-f85eba5a23c8
	I0910 20:02:00.247758    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:02:00.740360    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:00.740475    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:00.740475    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:00.740475    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:00.744216    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:00.745115    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:00.745115    8968 round_trippers.go:580]     Audit-Id: 63d29ffe-0799-4e53-bef5-32999527a116
	I0910 20:02:00.745115    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:00.745115    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:00.745115    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:00.745115    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:00.745115    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:00 GMT
	I0910 20:02:00.745347    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:02:00.745825    8968 node_ready.go:53] node "multinode-629100-m03" has status "Ready":"False"
	I0910 20:02:01.237611    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:01.237611    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.237611    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.237611    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.241206    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:01.241206    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.241206    8968 round_trippers.go:580]     Audit-Id: bc862643-9a68-4ce5-991d-31ac9eba6974
	I0910 20:02:01.241206    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.241206    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.241206    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.241206    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.241206    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:01 GMT
	I0910 20:02:01.241737    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2168","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3892 chars]
	I0910 20:02:01.738080    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:01.738209    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.738209    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.738333    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.740988    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.740988    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.740988    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:01 GMT
	I0910 20:02:01.740988    8968 round_trippers.go:580]     Audit-Id: 860b8764-3677-4528-ae1b-3b460b3e99c2
	I0910 20:02:01.741745    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.741745    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.741785    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.741785    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.741899    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2176","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3758 chars]
	I0910 20:02:01.742427    8968 node_ready.go:49] node "multinode-629100-m03" has status "Ready":"True"
	I0910 20:02:01.742427    8968 node_ready.go:38] duration metric: took 14.5074711s for node "multinode-629100-m03" to be "Ready" ...
	I0910 20:02:01.742427    8968 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 20:02:01.742848    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods
	I0910 20:02:01.742936    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.742936    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.742936    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.747160    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 20:02:01.747160    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.747160    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.747160    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.747160    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:01 GMT
	I0910 20:02:01.747160    8968 round_trippers.go:580]     Audit-Id: 2c1a0558-5188-425a-a3f7-5addf35dda44
	I0910 20:02:01.747160    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.747160    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.748409    8968 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2176"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89119 chars]
	I0910 20:02:01.751409    8968 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.751409    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-srtv8
	I0910 20:02:01.751409    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.752430    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.752430    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.755100    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.755100    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.755100    8968 round_trippers.go:580]     Audit-Id: 40f981fd-15cc-4246-b583-f03ad3dc5598
	I0910 20:02:01.755100    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.755100    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.755100    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.755100    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.755100    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.755476    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-srtv8","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"76dd899a-75f4-497d-a6a9-6b263f3a379d","resourceVersion":"1803","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"bcd02568-572a-4496-84f9-a047a3f17e67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcd02568-572a-4496-84f9-a047a3f17e67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0910 20:02:01.756097    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:01.756097    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.756176    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.756176    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.758725    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.758958    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.758958    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.758958    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.758958    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.758958    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.758958    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.758958    8968 round_trippers.go:580]     Audit-Id: 0b8db9d3-1caa-4aab-b233-c3dec45379f9
	I0910 20:02:01.758958    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:01.759580    8968 pod_ready.go:93] pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:01.759616    8968 pod_ready.go:82] duration metric: took 8.2061ms for pod "coredns-6f6b679f8f-srtv8" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.759616    8968 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.759742    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629100
	I0910 20:02:01.759742    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.759742    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.759742    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.762019    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.762593    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.762593    8968 round_trippers.go:580]     Audit-Id: def92302-fb91-42e1-80bd-2835397395ac
	I0910 20:02:01.762593    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.762593    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.762593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.762593    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.762593    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.762790    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629100","namespace":"kube-system","uid":"2e6b0a6d-5705-427c-89e2-ff2b3ca25cb6","resourceVersion":"1766","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.215.172:2379","kubernetes.io/config.hash":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.mirror":"fe5e551808132ccc0e55277539c9741f","kubernetes.io/config.seen":"2024-09-10T19:56:41.600229288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I0910 20:02:01.763309    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:01.763309    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.763309    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.763309    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.765257    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 20:02:01.765257    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.765257    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.765257    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.765257    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.765257    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.765257    8968 round_trippers.go:580]     Audit-Id: 0eb77c63-feab-4ae7-adc5-707c1e7cdc87
	I0910 20:02:01.766515    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.766515    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:01.766515    8968 pod_ready.go:93] pod "etcd-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:01.767116    8968 pod_ready.go:82] duration metric: took 7.4997ms for pod "etcd-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.767116    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.767116    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629100
	I0910 20:02:01.767116    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.767116    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.767116    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.769821    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 20:02:01.769821    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.769821    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.769821    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.769821    8968 round_trippers.go:580]     Audit-Id: a19d464e-9843-4e8f-a0c6-765d42f22d35
	I0910 20:02:01.769821    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.769821    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.769821    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.769821    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629100","namespace":"kube-system","uid":"5d262302-6bdf-4e67-bf9f-4b50d8f9fbd5","resourceVersion":"1763","creationTimestamp":"2024-09-10T19:56:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.215.172:8443","kubernetes.io/config.hash":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.mirror":"0a34f2a2bc072815a118e734662d14b6","kubernetes.io/config.seen":"2024-09-10T19:56:41.591483300Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:56:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I0910 20:02:01.769821    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:01.769821    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.769821    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.769821    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.773796    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:01.774007    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.774007    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.774007    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.774007    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.774007    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.774007    8968 round_trippers.go:580]     Audit-Id: bef49463-76ca-49dd-9475-56afdbb80fbd
	I0910 20:02:01.774007    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.774007    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:01.774007    8968 pod_ready.go:93] pod "kube-apiserver-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:01.774007    8968 pod_ready.go:82] duration metric: took 6.8905ms for pod "kube-apiserver-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.774007    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.774549    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629100
	I0910 20:02:01.774634    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.774634    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.774634    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.777275    8968 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 20:02:01.777275    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.777275    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.777275    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.777275    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.777275    8968 round_trippers.go:580]     Audit-Id: ad54ebea-f749-4d27-9cb9-621647aef2f3
	I0910 20:02:01.777275    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.777275    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.777592    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629100","namespace":"kube-system","uid":"60adcb0c-808a-477c-9432-83dc8f96d6c0","resourceVersion":"1770","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.mirror":"2a7614069b6cbf30e1abf0d1e34a8a71","kubernetes.io/config.seen":"2024-09-10T19:35:40.972009282Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0910 20:02:01.778070    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:01.778131    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.778131    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.778131    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.780591    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:01.780693    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.780693    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.780693    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.780693    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.780693    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.780693    8968 round_trippers.go:580]     Audit-Id: f35b7e4c-b163-49b7-b34a-a98fbf9aac5e
	I0910 20:02:01.780693    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.780693    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:01.780693    8968 pod_ready.go:93] pod "kube-controller-manager-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:01.781226    8968 pod_ready.go:82] duration metric: took 7.2185ms for pod "kube-controller-manager-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.781226    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:01.942249    8968 request.go:632] Waited for 160.6776ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 20:02:01.942465    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tzx6
	I0910 20:02:01.942465    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:01.942465    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:01.942465    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:01.946031    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:01.946031    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:01.946031    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:01.946031    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:01.946031    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:01.946031    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:01.946031    8968 round_trippers.go:580]     Audit-Id: 533681df-f67e-4d0c-80f6-22391951fab4
	I0910 20:02:01.946031    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:01.946774    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4tzx6","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bb18c28-3ee9-4028-a61d-3d7f6ea31894","resourceVersion":"2153","creationTimestamp":"2024-09-10T19:42:55Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6208 chars]
	I0910 20:02:02.145611    8968 request.go:632] Waited for 198.2393ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:02.145611    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m03
	I0910 20:02:02.145611    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.145611    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.145611    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.149496    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:02.149496    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.149496    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.149496    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:02.149997    8968 round_trippers.go:580]     Audit-Id: 93f9ef4e-60e0-40f9-9c7e-54022aaa5d48
	I0910 20:02:02.149997    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.149997    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.149997    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.150208    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m03","uid":"e0dd5475-f3a4-4792-b51e-69b1926c7e20","resourceVersion":"2176","creationTimestamp":"2024-09-10T20:01:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T20_01_46_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T20:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3758 chars]
	I0910 20:02:02.150586    8968 pod_ready.go:93] pod "kube-proxy-4tzx6" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:02.150586    8968 pod_ready.go:82] duration metric: took 369.2896ms for pod "kube-proxy-4tzx6" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.150586    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.348481    8968 request.go:632] Waited for 197.6659ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 20:02:02.348605    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqrrg
	I0910 20:02:02.348605    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.348672    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.348672    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.352250    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:02.352250    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.352250    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.352250    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.352250    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.352250    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.352250    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:02.352250    8968 round_trippers.go:580]     Audit-Id: e0b34baf-3f65-40ab-aa1c-925d9c07e495
	I0910 20:02:02.353298    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qqrrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fc7fdda-d5e4-4c72-96c1-2348eb72b491","resourceVersion":"1960","creationTimestamp":"2024-09-10T19:38:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:38:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6203 chars]
	I0910 20:02:02.552537    8968 request.go:632] Waited for 198.711ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 20:02:02.552934    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100-m02
	I0910 20:02:02.552934    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.552934    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.552934    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.561546    8968 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 20:02:02.561546    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.561546    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.561546    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.561546    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:02.561546    8968 round_trippers.go:580]     Audit-Id: 5e6cbac9-d0a3-4af0-a707-eda87c5226fb
	I0910 20:02:02.561546    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.561546    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.562506    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100-m02","uid":"15bf5dcb-2d3b-4cb1-99f5-08f519f2a94f","resourceVersion":"1995","creationTimestamp":"2024-09-10T19:59:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_10T19_59_26_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:59:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3806 chars]
	I0910 20:02:02.562506    8968 pod_ready.go:93] pod "kube-proxy-qqrrg" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:02.562506    8968 pod_ready.go:82] duration metric: took 411.8917ms for pod "kube-proxy-qqrrg" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.562506    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.739110    8968 request.go:632] Waited for 176.5917ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 20:02:02.739110    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wqf2d
	I0910 20:02:02.739521    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.739521    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.739521    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.742823    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:02.743304    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.743304    8968 round_trippers.go:580]     Audit-Id: c56fbd43-2371-4c0f-b765-606ae9adbea0
	I0910 20:02:02.743304    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.743304    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.743304    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.743304    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.743304    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:02 GMT
	I0910 20:02:02.743424    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wqf2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"27e7846e-e506-48e0-96b9-351b3ca91703","resourceVersion":"1662","creationTimestamp":"2024-09-10T19:35:46Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"add27262-678e-46f7-96a7-42581f863a3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"add27262-678e-46f7-96a7-42581f863a3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0910 20:02:02.943067    8968 request.go:632] Waited for 198.7271ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:02.943203    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:02.943250    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:02.943250    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:02.943250    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:02.949406    8968 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0910 20:02:02.949406    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:02.949406    8968 round_trippers.go:580]     Audit-Id: 017cd4a9-05c2-408c-bb61-d8b0b55fe741
	I0910 20:02:02.949406    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:02.949406    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:02.949406    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:02.949406    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:02.949406    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:03 GMT
	I0910 20:02:02.950060    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:02.950060    8968 pod_ready.go:93] pod "kube-proxy-wqf2d" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:02.950060    8968 pod_ready.go:82] duration metric: took 387.5282ms for pod "kube-proxy-wqf2d" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:02.950060    8968 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:03.147199    8968 request.go:632] Waited for 197.0294ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 20:02:03.147493    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629100
	I0910 20:02:03.147493    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:03.147493    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:03.147622    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:03.151102    8968 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 20:02:03.151102    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:03.151102    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:03.151102    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:03.151102    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:03 GMT
	I0910 20:02:03.151102    8968 round_trippers.go:580]     Audit-Id: 03637e95-c1ac-41d6-92b0-8cb038cb0cd0
	I0910 20:02:03.151102    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:03.151102    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:03.151348    8968 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629100","namespace":"kube-system","uid":"fe93d1a3-98c4-44c9-b1fb-9d90a97b97c7","resourceVersion":"1757","creationTimestamp":"2024-09-10T19:35:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.mirror":"4f54b0774708cd0189f8ef861ea931f0","kubernetes.io/config.seen":"2024-09-10T19:35:40.972010383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-10T19:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0910 20:02:03.350153    8968 request.go:632] Waited for 198.0549ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:03.350153    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes/multinode-629100
	I0910 20:02:03.350350    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:03.350350    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:03.350350    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:03.353719    8968 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 20:02:03.353719    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:03.353719    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:03.353719    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:03.353719    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:03.353955    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:03.353955    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:03 GMT
	I0910 20:02:03.353955    8968 round_trippers.go:580]     Audit-Id: d8763ea0-2c31-49bb-b78e-5d9871fcf628
	I0910 20:02:03.353955    8968 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"1777","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-10T19:35:37Z","fieldsType":"FieldsV1","f [truncated 5231 chars]
	I0910 20:02:03.354591    8968 pod_ready.go:93] pod "kube-scheduler-multinode-629100" in "kube-system" namespace has status "Ready":"True"
	I0910 20:02:03.354591    8968 pod_ready.go:82] duration metric: took 404.5028ms for pod "kube-scheduler-multinode-629100" in "kube-system" namespace to be "Ready" ...
	I0910 20:02:03.354591    8968 pod_ready.go:39] duration metric: took 1.6117576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 20:02:03.354702    8968 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 20:02:03.363114    8968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 20:02:03.390754    8968 system_svc.go:56] duration metric: took 36.0494ms WaitForService to wait for kubelet
	I0910 20:02:03.390865    8968 kubeadm.go:582] duration metric: took 16.3783364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 20:02:03.390865    8968 node_conditions.go:102] verifying NodePressure condition ...
	I0910 20:02:03.554628    8968 request.go:632] Waited for 163.3224ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.215.172:8443/api/v1/nodes
	I0910 20:02:03.554628    8968 round_trippers.go:463] GET https://172.31.215.172:8443/api/v1/nodes
	I0910 20:02:03.554628    8968 round_trippers.go:469] Request Headers:
	I0910 20:02:03.554628    8968 round_trippers.go:473]     Accept: application/json, */*
	I0910 20:02:03.554628    8968 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0910 20:02:03.558687    8968 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 20:02:03.558687    8968 round_trippers.go:577] Response Headers:
	I0910 20:02:03.558687    8968 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af01f40a-c5e0-496e-b24d-84f30c066c93
	I0910 20:02:03.558687    8968 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 19a9104d-dfcf-4f6c-9fbc-027a880a6b3f
	I0910 20:02:03.558687    8968 round_trippers.go:580]     Date: Tue, 10 Sep 2024 20:02:03 GMT
	I0910 20:02:03.558687    8968 round_trippers.go:580]     Audit-Id: 177a5a79-ffcb-4c28-8074-bd26786395c8
	I0910 20:02:03.558687    8968 round_trippers.go:580]     Cache-Control: no-cache, private
	I0910 20:02:03.558687    8968 round_trippers.go:580]     Content-Type: application/json
	I0910 20:02:03.558687    8968 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2180"},"items":[{"metadata":{"name":"multinode-629100","uid":"17c79a66-8b57-4b1b-9691-b6ecc9f8f103","resourceVersion":"2180","creationTimestamp":"2024-09-10T19:35:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-629100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"37b4bace07cd53444288cad630e4db4b688b8c18","minikube.k8s.io/name":"multinode-629100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_10T19_35_42_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14833 chars]
	I0910 20:02:03.559808    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 20:02:03.559808    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 20:02:03.559808    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 20:02:03.559808    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 20:02:03.559808    8968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 20:02:03.559808    8968 node_conditions.go:123] node cpu capacity is 2
	I0910 20:02:03.559808    8968 node_conditions.go:105] duration metric: took 168.932ms to run NodePressure ...
	I0910 20:02:03.559808    8968 start.go:241] waiting for startup goroutines ...
	I0910 20:02:03.559902    8968 start.go:255] writing updated cluster config ...
	I0910 20:02:03.568819    8968 ssh_runner.go:195] Run: rm -f paused
	I0910 20:02:03.687767    8968 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 20:02:03.691253    8968 out.go:177] * Done! kubectl is now configured to use "multinode-629100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.044493596Z" level=warning msg="cleaning up after shim disconnected" id=d78644ad6da20ca56d201dd6cd44531fe1c8fee7f864b5340426d9b70599b073 namespace=moby
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.044502597Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.068856493Z" level=warning msg="cleanup warnings time=\"2024-09-10T19:57:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.841955713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.842886209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.842969817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.843252246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.851949137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.852000143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.852012044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:18 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:18.852090752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:19 multinode-629100 cri-dockerd[1352]: time="2024-09-10T19:57:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b31945a718c27b4c2824e7c953e0e3d304fbe757e13f3f697c3ba45e7a7d1b82/resolv.conf as [nameserver 172.31.208.1]"
	Sep 10 19:57:19 multinode-629100 cri-dockerd[1352]: time="2024-09-10T19:57:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/878e8a395dfe7f550c4ffccd43f2633e5c04bddac7fe6e3cee8bed5e38f92307/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.325943412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.326116230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.326134932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.326807799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.354305976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.354512096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.354613907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:19 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:19.354871733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:29 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:29.863756118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 19:57:29 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:29.863833428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 19:57:29 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:29.863852531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 19:57:29 multinode-629100 dockerd[1088]: time="2024-09-10T19:57:29.865278216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0592e8e987e86       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   1382ad57fdb76       storage-provisioner
	07fb60b2369e2       8c811b4aec35f                                                                                         5 minutes ago       Running             busybox                   1                   878e8a395dfe7       busybox-7dff88458-lzs87
	bba13e3979fe6       cbb01a7bd410d                                                                                         5 minutes ago       Running             coredns                   1                   b31945a718c27       coredns-6f6b679f8f-srtv8
	79ce262b775dd       12968670680f4                                                                                         6 minutes ago       Running             kindnet-cni               1                   c165f79ee0c66       kindnet-lj2v2
	8df28a487cc9c       ad83b2ca7b09e                                                                                         6 minutes ago       Running             kube-proxy                1                   4d2ed7f661678       kube-proxy-wqf2d
	d78644ad6da20       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       1                   1382ad57fdb76       storage-provisioner
	5371b75c6a4eb       2e96e5913fc06                                                                                         6 minutes ago       Running             etcd                      0                   f5952139dd10d       etcd-multinode-629100
	6c4b89f91c728       604f5db92eaa8                                                                                         6 minutes ago       Running             kube-apiserver            0                   8b41f5de76aa6       kube-apiserver-multinode-629100
	c6849e798f8b7       1766f54c897f0                                                                                         6 minutes ago       Running             kube-scheduler            1                   d862d30a973e0       kube-scheduler-multinode-629100
	1dc6a0b68f7be       045733566833c                                                                                         6 minutes ago       Running             kube-controller-manager   1                   af4deda9f3e84       kube-controller-manager-multinode-629100
	b1a88f7f52270       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   ea5e1070e7dea       busybox-7dff88458-lzs87
	039fd49f157a9       cbb01a7bd410d                                                                                         27 minutes ago      Exited              coredns                   0                   bf116f91589fc       coredns-6f6b679f8f-srtv8
	33f88ed7aee25       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              27 minutes ago      Exited              kindnet-cni               0                   1d92603202b00       kindnet-lj2v2
	85b03f4986715       ad83b2ca7b09e                                                                                         27 minutes ago      Exited              kube-proxy                0                   4e550827f00f7       kube-proxy-wqf2d
	5cb559fed2d8a       1766f54c897f0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   49d9c6949234d       kube-scheduler-multinode-629100
	ea7220d439d1b       045733566833c                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   db7037ca07a46       kube-controller-manager-multinode-629100
	
	
	==> coredns [039fd49f157a] <==
	[INFO] 10.244.0.3:49423 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092007s
	[INFO] 10.244.0.3:43701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095107s
	[INFO] 10.244.0.3:51536 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048303s
	[INFO] 10.244.0.3:59362 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00028942s
	[INFO] 10.244.0.3:37417 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014271s
	[INFO] 10.244.0.3:50609 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077805s
	[INFO] 10.244.0.3:45492 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014891s
	[INFO] 10.244.1.2:47303 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100307s
	[INFO] 10.244.1.2:50959 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013871s
	[INFO] 10.244.1.2:34061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064904s
	[INFO] 10.244.1.2:33504 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059204s
	[INFO] 10.244.0.3:44472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014991s
	[INFO] 10.244.0.3:51126 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130209s
	[INFO] 10.244.0.3:35880 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062805s
	[INFO] 10.244.0.3:47290 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114308s
	[INFO] 10.244.1.2:59801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127909s
	[INFO] 10.244.1.2:44820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105107s
	[INFO] 10.244.1.2:51097 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169412s
	[INFO] 10.244.1.2:50721 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136709s
	[INFO] 10.244.0.3:48616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000315622s
	[INFO] 10.244.0.3:45256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000171611s
	[INFO] 10.244.0.3:51021 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077906s
	[INFO] 10.244.0.3:42471 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135209s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bba13e3979fe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48913 - 35653 "HINFO IN 6807478851987409090.9100571777494782227. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.075144087s
	
	
	==> describe nodes <==
	Name:               multinode-629100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-629100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-629100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T19_35_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 19:35:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-629100
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 20:03:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 20:02:03 +0000   Tue, 10 Sep 2024 19:35:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 20:02:03 +0000   Tue, 10 Sep 2024 19:35:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 20:02:03 +0000   Tue, 10 Sep 2024 19:35:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 20:02:03 +0000   Tue, 10 Sep 2024 19:57:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.215.172
	  Hostname:    multinode-629100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d19cefb91db9497b816b4a43c361a0ab
	  System UUID:                e294be3b-926e-3f4f-8147-8c2e1d6d31e8
	  Boot ID:                    65691e6c-346f-4af5-abb6-c00142d61fbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lzs87                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-6f6b679f8f-srtv8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-629100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-lj2v2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-629100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-multinode-629100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-wqf2d                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-629100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 6m21s                  kube-proxy       
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-629100 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-629100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-629100 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-629100 event: Registered Node multinode-629100 in Controller
	  Normal  NodeReady                27m                    kubelet          Node multinode-629100 status is now: NodeReady
	  Normal  Starting                 6m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m28s (x8 over 6m28s)  kubelet          Node multinode-629100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s (x8 over 6m28s)  kubelet          Node multinode-629100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s (x7 over 6m28s)  kubelet          Node multinode-629100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m20s                  node-controller  Node multinode-629100 event: Registered Node multinode-629100 in Controller
	
	
	Name:               multinode-629100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-629100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-629100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T19_59_26_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 19:59:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-629100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 20:02:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:59:42 +0000   Tue, 10 Sep 2024 19:59:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:59:42 +0000   Tue, 10 Sep 2024 19:59:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:59:42 +0000   Tue, 10 Sep 2024 19:59:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:59:42 +0000   Tue, 10 Sep 2024 19:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.210.34
	  Hostname:    multinode-629100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf73bb2604364adb85232c437f2a54e5
	  System UUID:                0fc9d8ea-7869-bd42-95ee-012842e5540a
	  Boot ID:                    e782fa6a-1fc5-40ad-b179-77ffc1e8f660
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t8d6l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kindnet-5crht              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-qqrrg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  Starting                 24m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)      kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)      kubelet          Node multinode-629100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)      kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                24m                    kubelet          Node multinode-629100-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m44s (x2 over 3m44s)  kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x2 over 3m44s)  kubelet          Node multinode-629100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x2 over 3m44s)  kubelet          Node multinode-629100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m39s                  node-controller  Node multinode-629100-m02 event: Registered Node multinode-629100-m02 in Controller
	  Normal  NodeReady                3m27s                  kubelet          Node multinode-629100-m02 status is now: NodeReady
	
	
	Name:               multinode-629100-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-629100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-629100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T20_01_46_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 20:01:46 +0000
	Taints:             node.kubernetes.io/unschedulable:NoSchedule
	Unschedulable:      true
	Lease:
	  HolderIdentity:  multinode-629100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 20:03:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 20:02:01 +0000   Tue, 10 Sep 2024 20:01:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 20:02:01 +0000   Tue, 10 Sep 2024 20:01:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 20:02:01 +0000   Tue, 10 Sep 2024 20:01:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 20:02:01 +0000   Tue, 10 Sep 2024 20:02:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.214.220
	  Hostname:    multinode-629100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d23f1ff9857046be89b402522869a516
	  System UUID:                706c2b98-3b56-344f-94d7-74a48fb097d3
	  Boot ID:                    e4b979d4-1a65-4168-8619-a4804df70d72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6tdpv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-proxy-4tzx6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 79s                kube-proxy       
	  Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-629100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                19m                kubelet          Node multinode-629100-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node multinode-629100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                10m                kubelet          Node multinode-629100-m03 status is now: NodeReady
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s (x2 over 83s)  kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x2 over 83s)  kubelet          Node multinode-629100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x2 over 83s)  kubelet          Node multinode-629100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           79s                node-controller  Node multinode-629100-m03 event: Registered Node multinode-629100-m03 in Controller
	  Normal  NodeReady                68s                kubelet          Node multinode-629100-m03 status is now: NodeReady
	  Normal  NodeNotSchedulable       12s                kubelet          Node multinode-629100-m03 status is now: NodeNotSchedulable
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +6.228148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.707768] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.867413] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.281183] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep10 19:56] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.181344] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[ +23.772473] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +0.080755] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.467546] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.176877] systemd-fstab-generator[1060]: Ignoring "noauto" option for root device
	[  +0.208774] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +2.885128] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.162199] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.160750] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.237614] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.793443] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	[  +0.074688] kauditd_printk_skb: 202 callbacks suppressed
	[  +2.886212] systemd-fstab-generator[1614]: Ignoring "noauto" option for root device
	[  +5.954658] kauditd_printk_skb: 84 callbacks suppressed
	[  +3.913261] systemd-fstab-generator[2466]: Ignoring "noauto" option for root device
	[Sep10 19:57] kauditd_printk_skb: 72 callbacks suppressed
	[ +11.857050] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [5371b75c6a4e] <==
	{"level":"info","ts":"2024-09-10T19:56:43.603537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf switched to configuration voters=(2112820234258889423)"}
	{"level":"info","ts":"2024-09-10T19:56:43.604076Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8392dc51522b279d","local-member-id":"1d523ecf11423acf","added-peer-id":"1d523ecf11423acf","added-peer-peer-urls":["https://172.31.210.71:2380"]}
	{"level":"info","ts":"2024-09-10T19:56:43.604575Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8392dc51522b279d","local-member-id":"1d523ecf11423acf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:56:43.605627Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:56:43.623084Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T19:56:43.635867Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.31.215.172:2380"}
	{"level":"info","ts":"2024-09-10T19:56:43.641021Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.31.215.172:2380"}
	{"level":"info","ts":"2024-09-10T19:56:43.641152Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1d523ecf11423acf","initial-advertise-peer-urls":["https://172.31.215.172:2380"],"listen-peer-urls":["https://172.31.215.172:2380"],"advertise-client-urls":["https://172.31.215.172:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.215.172:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T19:56:43.645591Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T19:56:45.149423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-10T19:56:45.149537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T19:56:45.149583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf received MsgPreVoteResp from 1d523ecf11423acf at term 2"}
	{"level":"info","ts":"2024-09-10T19:56:45.149596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T19:56:45.149606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf received MsgVoteResp from 1d523ecf11423acf at term 3"}
	{"level":"info","ts":"2024-09-10T19:56:45.149617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d523ecf11423acf became leader at term 3"}
	{"level":"info","ts":"2024-09-10T19:56:45.149637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1d523ecf11423acf elected leader 1d523ecf11423acf at term 3"}
	{"level":"info","ts":"2024-09-10T19:56:45.154855Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1d523ecf11423acf","local-member-attributes":"{Name:multinode-629100 ClientURLs:[https://172.31.215.172:2379]}","request-path":"/0/members/1d523ecf11423acf/attributes","cluster-id":"8392dc51522b279d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T19:56:45.154972Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:56:45.155651Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T19:56:45.155828Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T19:56:45.155951Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:56:45.158005Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:56:45.159101Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T19:56:45.158078Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:56:45.160906Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.215.172:2379"}
	
	
	==> kernel <==
	 20:03:09 up 7 min,  0 users,  load average: 0.35, 0.18, 0.08
	Linux multinode-629100 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33f88ed7aee2] <==
	I0910 19:53:45.159789       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:53:55.153125       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:53:55.153224       1 main.go:299] handling current node
	I0910 19:53:55.153249       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:53:55.153264       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:53:55.153453       1 main.go:295] Handling node with IPs: map[172.31.210.110:{}]
	I0910 19:53:55.153476       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:54:05.152730       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:54:05.152911       1 main.go:299] handling current node
	I0910 19:54:05.152928       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:54:05.152947       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:54:05.153065       1 main.go:295] Handling node with IPs: map[172.31.210.110:{}]
	I0910 19:54:05.153236       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:54:15.155625       1 main.go:295] Handling node with IPs: map[172.31.210.110:{}]
	I0910 19:54:15.155896       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:54:15.156166       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:54:15.156260       1 main.go:299] handling current node
	I0910 19:54:15.156356       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:54:15.156465       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 19:54:25.160608       1 main.go:295] Handling node with IPs: map[172.31.210.110:{}]
	I0910 19:54:25.160926       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.3.0/24] 
	I0910 19:54:25.161180       1 main.go:295] Handling node with IPs: map[172.31.210.71:{}]
	I0910 19:54:25.161345       1 main.go:299] handling current node
	I0910 19:54:25.161445       1 main.go:295] Handling node with IPs: map[172.31.209.0:{}]
	I0910 19:54:25.161537       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [79ce262b775d] <==
	I0910 20:02:29.409275       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.2.0/24] 
	I0910 20:02:39.410628       1 main.go:295] Handling node with IPs: map[172.31.210.34:{}]
	I0910 20:02:39.410743       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 20:02:39.410882       1 main.go:295] Handling node with IPs: map[172.31.214.220:{}]
	I0910 20:02:39.410966       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.2.0/24] 
	I0910 20:02:39.411125       1 main.go:295] Handling node with IPs: map[172.31.215.172:{}]
	I0910 20:02:39.411224       1 main.go:299] handling current node
	I0910 20:02:49.408607       1 main.go:295] Handling node with IPs: map[172.31.215.172:{}]
	I0910 20:02:49.408712       1 main.go:299] handling current node
	I0910 20:02:49.408800       1 main.go:295] Handling node with IPs: map[172.31.210.34:{}]
	I0910 20:02:49.408827       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 20:02:49.409250       1 main.go:295] Handling node with IPs: map[172.31.214.220:{}]
	I0910 20:02:49.409276       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.2.0/24] 
	I0910 20:02:59.413995       1 main.go:295] Handling node with IPs: map[172.31.214.220:{}]
	I0910 20:02:59.414027       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.2.0/24] 
	I0910 20:02:59.414544       1 main.go:295] Handling node with IPs: map[172.31.215.172:{}]
	I0910 20:02:59.414633       1 main.go:299] handling current node
	I0910 20:02:59.415364       1 main.go:295] Handling node with IPs: map[172.31.210.34:{}]
	I0910 20:02:59.415576       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	I0910 20:03:09.416378       1 main.go:295] Handling node with IPs: map[172.31.214.220:{}]
	I0910 20:03:09.416679       1 main.go:322] Node multinode-629100-m03 has CIDR [10.244.2.0/24] 
	I0910 20:03:09.417018       1 main.go:295] Handling node with IPs: map[172.31.215.172:{}]
	I0910 20:03:09.417080       1 main.go:299] handling current node
	I0910 20:03:09.417097       1 main.go:295] Handling node with IPs: map[172.31.210.34:{}]
	I0910 20:03:09.417104       1 main.go:322] Node multinode-629100-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [6c4b89f91c72] <==
	I0910 19:56:46.475771       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 19:56:46.475798       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0910 19:56:46.478887       1 aggregator.go:171] initial CRD sync complete...
	I0910 19:56:46.479090       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 19:56:46.479270       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 19:56:46.480187       1 cache.go:39] Caches are synced for autoregister controller
	I0910 19:56:46.518495       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0910 19:56:46.540111       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 19:56:46.540210       1 policy_source.go:224] refreshing policies
	I0910 19:56:46.548245       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 19:56:46.550689       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 19:56:46.551091       1 shared_informer.go:320] Caches are synced for configmaps
	I0910 19:56:46.554884       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 19:56:46.555100       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 19:56:46.560946       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0910 19:56:46.569884       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 19:56:47.356019       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0910 19:56:47.771678       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.215.172]
	I0910 19:56:47.775512       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 19:56:47.791959       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 19:56:49.050278       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 19:56:49.226365       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 19:56:49.241490       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 19:56:49.348980       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 19:56:49.359102       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1dc6a0b68f7b] <==
	I0910 19:59:52.732434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.307µs"
	I0910 19:59:52.737346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.205µs"
	I0910 19:59:53.763781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.484894ms"
	I0910 19:59:53.764137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.208µs"
	I0910 20:01:36.231192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:36.253589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:40.989515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 20:01:40.989849       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:46.552799       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-629100-m03\" does not exist"
	I0910 20:01:46.552860       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 20:01:46.567605       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-629100-m03" podCIDRs=["10.244.2.0/24"]
	I0910 20:01:46.567688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:46.567708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:46.590490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:46.732405       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:47.253334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:50.252496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:01:56.654527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:02:01.777048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 20:02:01.777404       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:02:01.795533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:02:03.670236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100"
	I0910 20:02:05.157627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:02:48.285749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 20:02:48.303731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	
	
	==> kube-controller-manager [ea7220d439d1] <==
	I0910 19:50:05.897148       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:50:05.899096       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:50:05.923857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:50:11.040217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:19.516846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:19.534024       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:24.104286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:24.104342       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:52:29.743528       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:52:29.744004       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-629100-m03\" does not exist"
	I0910 19:52:29.771781       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-629100-m03" podCIDRs=["10.244.3.0/24"]
	I0910 19:52:29.773227       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:29.773553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:29.857215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:30.388502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:31.071748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:39.931055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:45.228950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:45.229037       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:52:45.244148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:52:45.992858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:54:11.025904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:54:11.025988       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-629100-m02"
	I0910 19:54:11.054844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	I0910 19:54:16.335919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-629100-m03"
	
	
	==> kube-proxy [85b03f498671] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 19:35:47.926887       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 19:35:47.936949       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.31.210.71"]
	E0910 19:35:47.937088       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 19:35:47.985558       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 19:35:47.985667       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 19:35:47.985694       1 server_linux.go:169] "Using iptables Proxier"
	I0910 19:35:47.989351       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 19:35:47.989836       1 server.go:483] "Version info" version="v1.31.0"
	I0910 19:35:47.989943       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:35:47.992068       1 config.go:197] "Starting service config controller"
	I0910 19:35:47.994045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 19:35:47.994294       1 config.go:326] "Starting node config controller"
	I0910 19:35:47.994439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 19:35:47.993518       1 config.go:104] "Starting endpoint slice config controller"
	I0910 19:35:47.996484       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 19:35:48.095182       1 shared_informer.go:320] Caches are synced for service config
	I0910 19:35:48.095444       1 shared_informer.go:320] Caches are synced for node config
	I0910 19:35:48.097751       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [8df28a487cc9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 19:56:48.308685       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 19:56:48.377764       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.31.215.172"]
	E0910 19:56:48.377940       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 19:56:48.505658       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 19:56:48.505871       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 19:56:48.506111       1 server_linux.go:169] "Using iptables Proxier"
	I0910 19:56:48.512108       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 19:56:48.515499       1 server.go:483] "Version info" version="v1.31.0"
	I0910 19:56:48.515674       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:56:48.526692       1 config.go:197] "Starting service config controller"
	I0910 19:56:48.526841       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 19:56:48.527004       1 config.go:104] "Starting endpoint slice config controller"
	I0910 19:56:48.528318       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 19:56:48.548045       1 config.go:326] "Starting node config controller"
	I0910 19:56:48.548072       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 19:56:48.628236       1 shared_informer.go:320] Caches are synced for service config
	I0910 19:56:48.628522       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 19:56:48.650394       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5cb559fed2d8] <==
	E0910 19:35:38.864572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:38.875237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 19:35:38.875432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:38.900948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 19:35:38.900977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:38.957305       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 19:35:38.957506       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 19:35:38.997653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 19:35:38.997837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.004298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 19:35:39.004563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.017869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 19:35:39.017920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.089188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 19:35:39.089469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.288341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 19:35:39.288858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.326675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 19:35:39.327101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.349957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 19:35:39.350170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 19:35:39.392655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 19:35:39.392930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 19:35:40.833153       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 19:54:30.585174       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c6849e798f8b] <==
	I0910 19:56:44.356001       1 serving.go:386] Generated self-signed cert in-memory
	W0910 19:56:46.395195       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 19:56:46.395391       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 19:56:46.395485       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 19:56:46.395664       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 19:56:46.485826       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 19:56:46.486041       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:56:46.492219       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 19:56:46.492218       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 19:56:46.496339       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 19:56:46.492305       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 19:56:46.597834       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 19:58:41 multinode-629100 kubelet[1621]: E0910 19:58:41.720028    1621 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:58:41 multinode-629100 kubelet[1621]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:58:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:58:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:58:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:59:41 multinode-629100 kubelet[1621]: E0910 19:59:41.719702    1621 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:59:41 multinode-629100 kubelet[1621]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:59:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:59:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:59:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 20:00:41 multinode-629100 kubelet[1621]: E0910 20:00:41.720916    1621 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 20:00:41 multinode-629100 kubelet[1621]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 20:00:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 20:00:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 20:00:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 20:01:41 multinode-629100 kubelet[1621]: E0910 20:01:41.721889    1621 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 20:01:41 multinode-629100 kubelet[1621]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 20:01:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 20:01:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 20:01:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 20:02:41 multinode-629100 kubelet[1621]: E0910 20:02:41.720325    1621 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 20:02:41 multinode-629100 kubelet[1621]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 20:02:41 multinode-629100 kubelet[1621]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 20:02:41 multinode-629100 kubelet[1621]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 20:02:41 multinode-629100 kubelet[1621]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-629100 -n multinode-629100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-629100 -n multinode-629100: (10.5301344s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-629100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (47.29s)

TestKubernetesUpgrade (10800.395s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-885000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-885000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m23.7178806s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-885000
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-885000: (32.9891475s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-885000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-885000 status --format={{.Host}}: exit status 7 (2.2450827s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-885000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestForceSystemdFlag (2m22s)
	TestKubernetesUpgrade (7m16s)
	TestRunningBinaryUpgrade (12m54s)
	TestStartStop (12m54s)
	TestStoppedBinaryUpgrade (6m58s)
	TestStoppedBinaryUpgrade/Upgrade (6m57s)

goroutine 2639 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0004f6b60, 0xc0011edbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000008600, {0x4f65e40, 0x2a, 0x2a}, {0x2985b6d?, 0x5380cf?, 0x4f899c0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00012c960)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00012c960)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00078f200)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 43 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 50
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 2645 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0x7ffddc8b4e10?, {0xc0013436a8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x174, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000764f90)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0011ec180)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0011ec180)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0015829c0, 0xc0011ec180)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x385
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc001343c20?, {0x398bab8, 0xc0009b9900}, 0x3415770, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x398bab8?, 0xc0009b9900?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc0014efe28, 0x3b9aca00, 0x1a3185c5000, {0xc0014efd08?, 0x239de60?, 0x4cf2a8?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc0015829c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc0015829c0, 0xc001282cc0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2540
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1103 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0004f7380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0004f7380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc0004f7380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0004f7380, 0x3414480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 113 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0012828d0, 0x3b)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0xc001391d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x39c0120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001282900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006b4000, {0x397f1c0, 0xc0012cc000}, 0x1, 0xc0006d60c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b4000, 0x3b9aca00, 0x0, 0x1, 0xc0006d60c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 134
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 1403 [chan receive, 135 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000211ec0, 0xc0006d60c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1313
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 134 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001282900, 0xc0006d60c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 132
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 2634 [syscall, locked to thread]:
syscall.SyscallN(0xc0006a4380?, {0xc0012ebb20?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0x0?, 0x3fb2082789bf2e66?, 0x0?, 0x41486a0000000000?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x798, {0xc0012f9c7f?, 0x381, 0x5341df?}, 0xc0009b8f60?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00142c288?, {0xc0012f9c7f?, 0x8000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00142c288, {0xc0012f9c7f, 0x381, 0x381})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a65c0, {0xc0012f9c7f?, 0xc0012ebd98?, 0x3e3f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00160e0c0, {0x397dd40, 0xc0006b20f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x397de80, 0xc00160e0c0}, {0x397dd40, 0xc0006b20f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x397de80, 0xc00160e0c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x480c56?, {0x397de80?, 0xc00160e0c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x397de80, 0xc00160e0c0}, {0x397de00, 0xc0000a65c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2539
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 133 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x399bb80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 132
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 146 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39a54d0, 0xc0006d60c0}, 0xc0007c1f50, 0xc0007c1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x39a54d0, 0xc0006d60c0}, 0xa0?, 0xc0007c1f50, 0xc0007c1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39a54d0?, 0xc0006d60c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0007c1fd0?, 0x60e4a4?, 0xc0008c84e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 134
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 1122 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0004f7d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0004f7d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0004f7d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc0004f7d40, 0x34144c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1102 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0004f71e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0004f71e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc0004f71e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc0004f71e0, 0x3414488)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 147 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 146
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1105 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffddc8b4e10?, {0xc00138ba80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x764, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0018e1830)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0011ec780)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0011ec780)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0004f7ba0, 0xc0011ec780)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0004f7ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:91 +0x347
testing.tRunner(0xc0004f7ba0, 0x34144c8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2633 [syscall, locked to thread]:
syscall.SyscallN(0x497ec5?, {0xc0012efb20?, 0x241eed8?, 0xc0012efb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x48fdf6?, 0x5016e40?, 0xc0012efbf8?, 0x48283b?, 0x1d52e470108?, 0xc002b04035?, 0x478ba6?, 0xc000830000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x2a0, {0xc001655aad?, 0x553, 0x5341df?}, 0x239de60?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0011e8508?, {0xc001655aad?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0011e8508, {0xc001655aad, 0x553, 0x553})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6578, {0xc001655aad?, 0x1d5739a3dd8?, 0x20c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00160e090, {0x397dd40, 0xc0007dc010})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x397de80, 0xc00160e090}, {0x397dd40, 0xc0007dc010}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x397de80, 0xc00160e090})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x480c56?, {0x397de80?, 0xc00160e090?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x397de80, 0xc00160e090}, {0x397de00, 0xc0000a6578}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00091c060?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2539
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2617 [select]:
os/exec.(*Cmd).watchCtx(0xc001528000, 0xc0017003c0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2541
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2663 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc001381b20?, 0x4f92aa0?, 0xc0008ee1e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc001381b88?, 0x4f889b?, 0xc001381bf8?, 0x48283b?, 0x6?, 0xc001381ba8?, 0x5c2d2f?, 0xc00016e0d0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x214, {0xc0015a1a10?, 0x5f0, 0x0?}, 0x60?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00135a008?, {0xc0015a1a10?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00135a008, {0xc0015a1a10, 0x5f0, 0x5f0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006b27d8, {0xc0015a1a10?, 0x0?, 0x210?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0005562d0, {0x397dd40, 0xc000792878})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x397de80, 0xc0005562d0}, {0x397dd40, 0xc000792878}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x292a1e2?, {0x397de80, 0xc0005562d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5c8440?, {0x397de80?, 0xc0005562d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x397de80, 0xc0005562d0}, {0x397de00, 0xc0006b27d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x3414580?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1105
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2635 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0011ec000, 0xc001700480)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2539
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2463 [chan receive, 13 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000912000, 0x3414790)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2537
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1166 [IO wait, 154 minutes]:
internal/poll.runtime_pollWait(0x1d573b90ad0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0006a1408?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0015022a0, 0xc001261bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc001502288, 0x300, {0xc00044a4b0?, 0x0?, 0x0?}, 0xc0006a1008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc001502288, 0xc001261d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc001502288)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0008caa40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0008caa40)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00021e5a0, {0x3998130, 0xc0008caa40})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc00021e5a0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0008b1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1163
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

goroutine 2540 [chan receive, 7 minutes]:
testing.(*T).Run(0xc001440680, {0x292ccda?, 0x3005753e800?}, 0xc001282cc0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc001440680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc001440680, 0x34145b8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2464 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009121a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009121a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0009121a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0009121a0, 0xc001282200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2463
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1373 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1372
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1402 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x399bb80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1313
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2541 [syscall, locked to thread]:
syscall.SyscallN(0x7ffddc8b4e10?, {0xc001737798?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x76c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0016726c0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001528000)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001528000)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001440820, 0xc001528000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc001440820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:243 +0xaff
testing.tRunner(0xc001440820, 0x3414530)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2537 [chan receive, 13 minutes]:
testing.(*T).Run(0xc001440000, {0x2928cc9?, 0x5c73d3?}, 0x3414790)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001440000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001440000, 0x34145b0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2477 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008b0820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008b0820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0008b0820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc0008b0820, 0x3414568)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1104 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0004f7520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0004f7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc0004f7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc0004f7520, 0x3414498)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2539 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffddc8b4e10?, {0xc00126b960?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x79c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00164e1b0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0011ec000)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0011ec000)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014404e0, 0xc0011ec000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0014404e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x788
testing.tRunner(0xc0014404e0, 0x3414590)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1371 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000211e90, 0x31)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0xc0018bfd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x39c0120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000211ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008031c0, {0x397f1c0, 0xc001dc4a80}, 0x1, 0xc0006d60c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008031c0, 0x3b9aca00, 0x0, 0x1, 0xc0006d60c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1403
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 2631 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0x497ec5?, {0xc001517b20?, 0x236ed00?, 0xc001517b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x48fdf6?, 0x5016e40?, 0xc001517bf8?, 0x4829a5?, 0x1d52e470a28?, 0x79e935?, 0xc0013dee08?, 0x39a5117?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7a0, {0xc001616000?, 0x200, 0x0?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0011e8a08?, {0xc001616000?, 0x200?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0011e8a08, {0xc001616000, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006b2690, {0xc001616000?, 0x48283b?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00160f890, {0x397dd40, 0xc0006b26c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x397de80, 0xc00160f890}, {0x397dd40, 0xc0006b26c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0015281d0?, {0x397de80, 0xc00160f890})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001620d80?, {0x397de80?, 0xc00160f890?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x397de80, 0xc00160f890}, {0x397de00, 0xc0006b2690}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001528180?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2645
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2632 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc0011ec180, 0xc001621020)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2645
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 1401 [chan send, 135 minutes]:
os/exec.(*Cmd).watchCtx(0xc0011ec900, 0xc0006d7680)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1304
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 1428 [chan send, 133 minutes]:
os/exec.(*Cmd).watchCtx(0xc001528480, 0xc00091c9c0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1427
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 1372 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39a54d0, 0xc0006d60c0}, 0xc0008e3f50, 0xc0008e3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x39a54d0, 0xc0006d60c0}, 0xa0?, 0xc0008e3f50, 0xc0008e3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39a54d0?, 0xc0006d60c0?}, 0x0?, 0xc0008e3fd0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0008e3fd0?, 0x60e4a4?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1403
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 2563 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000912680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000912680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000912680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000912680, 0xc001282440)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2463
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2465 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000912340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000912340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000912340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000912340, 0xc001282380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2463
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2615 [syscall, locked to thread]:
syscall.SyscallN(0x497ec5?, {0xc0013edb20?, 0x2372748?, 0xc0013edb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x48fdf6?, 0x5016e40?, 0xc0013edbf8?, 0x48283b?, 0x1d52e470eb8?, 0xc001464141?, 0x478ba6?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7ec, {0xc0015a0a0e?, 0x5f2, 0x0?}, 0x10?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000168c88?, {0xc0015a0a0e?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000168c88, {0xc0015a0a0e, 0x5f2, 0x5f2})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6490, {0xc0015a0a0e?, 0x1d573962e48?, 0x20e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00160e150, {0x397dd40, 0xc000792028})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x397de80, 0xc00160e150}, {0x397dd40, 0xc000792028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x397de80, 0xc00160e150})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x480c56?, {0x397de80?, 0xc00160e150?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x397de80, 0xc00160e150}, {0x397de00, 0xc0000a6490}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0007de1c0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2541
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2564 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000912820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000912820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000912820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000912820, 0xc001282600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2463
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2562 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009124e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009124e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0009124e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0009124e0, 0xc001282400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2463
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2616 [syscall, locked to thread]:
syscall.SyscallN(0x497ec5?, {0xc001271b20?, 0x2372748?, 0xc001271b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x48fdf6?, 0x5016e40?, 0xc001271bf8?, 0x4829a5?, 0x1d52e470eb8?, 0x32006300000077?, 0x478ba6?, 0x61003800390066?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x758, {0xc00120a0d7?, 0x1f29, 0x5341df?}, 0x41004100410041?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000169188?, {0xc00120a0d7?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000169188, {0xc00120a0d7, 0x1f29, 0x1f29})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6508, {0xc00120a0d7?, 0x1d5739a3dd8?, 0x1e1c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00160e180, {0x397dd40, 0xc0007dca00})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x397de80, 0xc00160e180}, {0x397dd40, 0xc0007dca00}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001271e78?, {0x397de80, 0xc00160e180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001271f38?, {0x397de80?, 0xc00160e180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x397de80, 0xc00160e180}, {0x397de00, 0xc0000a6508}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001700420?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2541
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2630 [syscall, locked to thread]:
syscall.SyscallN(0x497ec5?, {0xc001a59b20?, 0x3da1438?, 0xc001a59b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x48fdf6?, 0x5016e40?, 0xc001a59bf8?, 0x4829a5?, 0x1d52e470eb8?, 0xc001467f4d?, 0x10?, 0x10?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x44c, {0xc00125e275?, 0x58b, 0x5341df?}, 0xc001466b70?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0011e8288?, {0xc00125e275?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0011e8288, {0xc00125e275, 0x58b, 0x58b})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006b2590, {0xc00125e275?, 0xc001e502d0?, 0x23e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00160f860, {0x397dd40, 0xc0000a6540})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x397de80, 0xc00160f860}, {0x397dd40, 0xc0000a6540}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x10?, {0x397de80, 0xc00160f860})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001a59eb8?, {0x397de80?, 0xc00160f860?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x397de80, 0xc00160f860}, {0x397de00, 0xc0006b2590}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000054fc0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2645
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2665 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0011ec780, 0xc00091cd80)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1105
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2664 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x497ec5?, {0xc001259b20?, 0x3da1438?, 0xc001259b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x48fdf6?, 0x5016e40?, 0xc001259bf8?, 0x4829a5?, 0x1d52e470598?, 0x7f9467?, 0x478ba6?, 0x40?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7a4, {0xc000999bc6?, 0x43a, 0x5341df?}, 0xc00012da10?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00135aa08?, {0xc000999bc6?, 0x2000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00135aa08, {0xc000999bc6, 0x43a, 0x43a})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006b2858, {0xc000999bc6?, 0xc001259d98?, 0x1000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000556420, {0x397dd40, 0xc0007dca58})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x397de80, 0xc000556420}, {0x397dd40, 0xc0007dca58}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x397de80, 0xc000556420})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x480c56?, {0x397de80?, 0xc000556420?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x397de80, 0xc000556420}, {0x397de00, 0xc0006b2858}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0001fed80?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1105
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2565 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006da690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009129c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009129c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0009129c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0009129c0, 0xc0012827c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2463
	/usr/local/go/src/testing/testing.go:1742 +0x390

TestNoKubernetes/serial/StartWithK8s (307.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-447800 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-447800 --driver=hyperv: exit status 1 (4m59.7515557s)

-- stdout --
	* [NoKubernetes-447800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-447800" primary control-plane node in "NoKubernetes-447800" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-447800 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-447800 -n NoKubernetes-447800
E0910 20:23:54.614227    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-447800 -n NoKubernetes-447800: exit status 7 (7.940495s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0910 20:23:55.402338    4448 status.go:352] failed to get driver ip: getting IP: IP not found
	E0910 20:23:55.402338    4448 status.go:249] status error: getting IP: IP not found

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-447800" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (307.69s)


Test pass (132/201)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 17.13
4 TestDownloadOnly/v1.20.0/preload-exists 0.05
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.45
9 TestDownloadOnly/v1.20.0/DeleteAll 0.57
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.71
12 TestDownloadOnly/v1.31.0/json-events 10.26
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.36
18 TestDownloadOnly/v1.31.0/DeleteAll 0.56
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.55
21 TestBinaryMirror 6.23
22 TestOffline 356.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.23
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
27 TestAddons/Setup 411.04
29 TestAddons/serial/Volcano 60.98
31 TestAddons/serial/GCPAuth/Namespaces 0.29
34 TestAddons/parallel/Ingress 59.16
35 TestAddons/parallel/InspektorGadget 24.11
36 TestAddons/parallel/MetricsServer 18.69
37 TestAddons/parallel/HelmTiller 25.2
39 TestAddons/parallel/CSI 73.76
40 TestAddons/parallel/Headlamp 48.22
41 TestAddons/parallel/CloudSpanner 18.37
42 TestAddons/parallel/LocalPath 80.46
43 TestAddons/parallel/NvidiaDevicePlugin 18.83
44 TestAddons/parallel/Yakd 25.29
45 TestAddons/StoppedEnableDisable 49.55
57 TestErrorSpam/start 15.35
58 TestErrorSpam/status 32.7
59 TestErrorSpam/pause 20.26
60 TestErrorSpam/unpause 20.54
61 TestErrorSpam/stop 51.42
64 TestFunctional/serial/CopySyncFile 0.03
65 TestFunctional/serial/StartWithProxy 215.33
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 142.92
68 TestFunctional/serial/KubeContext 0.11
69 TestFunctional/serial/KubectlGetPods 0.19
72 TestFunctional/serial/CacheCmd/cache/add_remote 24.15
73 TestFunctional/serial/CacheCmd/cache/add_local 9.44
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.22
75 TestFunctional/serial/CacheCmd/cache/list 0.22
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.45
77 TestFunctional/serial/CacheCmd/cache/cache_reload 32.25
78 TestFunctional/serial/CacheCmd/cache/delete 0.44
79 TestFunctional/serial/MinikubeKubectlCmd 0.41
83 TestFunctional/serial/LogsCmd 109.75
84 TestFunctional/serial/LogsFileCmd 180.68
87 TestFunctional/parallel/ConfigCmd 1.57
96 TestFunctional/parallel/AddonsCmd 0.52
99 TestFunctional/parallel/SSHCmd 17.39
100 TestFunctional/parallel/CpCmd 50.72
102 TestFunctional/parallel/FileSync 8.25
103 TestFunctional/parallel/CertSync 49.55
109 TestFunctional/parallel/NonActiveRuntimeDisabled 8.71
111 TestFunctional/parallel/License 2
112 TestFunctional/parallel/ProfileCmd/profile_not_create 10.25
113 TestFunctional/parallel/ProfileCmd/profile_list 9.52
114 TestFunctional/parallel/ProfileCmd/profile_json_output 9.14
117 TestFunctional/parallel/Version/short 0.21
118 TestFunctional/parallel/Version/components 6.74
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ImageCommands/Setup 2
136 TestFunctional/parallel/UpdateContextCmd/no_changes 2.12
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.16
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.13
147 TestFunctional/parallel/ImageCommands/ImageRemove 120.25
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 60.18
150 TestFunctional/delete_echo-server_images 0.18
151 TestFunctional/delete_my-image_image 0.08
152 TestFunctional/delete_minikube_cached_images 0.07
156 TestMultiControlPlane/serial/StartCluster 649.62
157 TestMultiControlPlane/serial/DeployApp 12.44
159 TestMultiControlPlane/serial/AddWorkerNode 235.89
160 TestMultiControlPlane/serial/NodeLabels 0.16
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 24.83
162 TestMultiControlPlane/serial/CopyFile 554.14
163 TestMultiControlPlane/serial/StopSecondaryNode 67.01
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 18.99
168 TestImageBuild/serial/Setup 176.68
169 TestImageBuild/serial/NormalBuild 9.35
170 TestImageBuild/serial/BuildWithBuildArg 7.95
171 TestImageBuild/serial/BuildWithDockerIgnore 7.51
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.43
177 TestJSONOutput/start/Audit 0
179 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Audit 0
185 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Audit 0
191 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/stop/Command 68.42
195 TestJSONOutput/stop/Audit 0
197 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
199 TestErrorJSONOutput 0.8
204 TestMainNoArgs 0.2
205 TestMinikubeProfile 469.81
208 TestMountStart/serial/StartWithMountFirst 140.42
209 TestMountStart/serial/VerifyMountFirst 8.49
210 TestMountStart/serial/StartWithMountSecond 139.73
211 TestMountStart/serial/VerifyMountSecond 8.41
212 TestMountStart/serial/DeleteFirst 25.09
213 TestMountStart/serial/VerifyMountPostDelete 8.51
214 TestMountStart/serial/Stop 28.39
215 TestMountStart/serial/RestartStopped 107.23
216 TestMountStart/serial/VerifyMountPostStop 8.52
219 TestMultiNode/serial/FreshStart2Nodes 395.16
220 TestMultiNode/serial/DeployApp2Nodes 9.24
222 TestMultiNode/serial/AddNode 210.03
223 TestMultiNode/serial/MultiNodeLabels 0.15
224 TestMultiNode/serial/ProfileList 10.31
225 TestMultiNode/serial/CopyFile 311.33
226 TestMultiNode/serial/StopNode 67.49
227 TestMultiNode/serial/StartAfterStop 170.68
233 TestPreload 482.43
234 TestScheduledStopWindows 302.2
245 TestNoKubernetes/serial/StartNoK8sWithVersion 0.32
253 TestPause/serial/Start 182.06
255 TestPause/serial/SecondStartNoReconfiguration 351.95
258 TestPause/serial/Pause 7.26
259 TestPause/serial/VerifyStatus 10.84
260 TestPause/serial/Unpause 7.06
261 TestPause/serial/PauseAgain 7.07
262 TestPause/serial/DeletePaused 43.66
263 TestPause/serial/VerifyDeletedResources 21.86
TestDownloadOnly/v1.20.0/json-events (17.13s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-884300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-884300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.1267517s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.13s)

TestDownloadOnly/v1.20.0/preload-exists (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.05s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.45s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-884300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-884300: exit status 85 (452.4695ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-884300 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:31 UTC |          |
	|         | -p download-only-884300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:31:40
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:31:40.927518    5504 out.go:345] Setting OutFile to fd 620 ...
	I0910 17:31:40.977168    5504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:31:40.977246    5504 out.go:358] Setting ErrFile to fd 624...
	I0910 17:31:40.977246    5504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0910 17:31:40.987851    5504 root.go:314] Error reading config file at C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0910 17:31:40.996298    5504 out.go:352] Setting JSON to true
	I0910 17:31:40.999237    5504 start.go:129] hostinfo: {"hostname":"minikube5","uptime":100764,"bootTime":1725888736,"procs":178,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 17:31:40.999237    5504 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 17:31:41.003929    5504 out.go:97] [download-only-884300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 17:31:41.005098    5504 notify.go:220] Checking for updates...
	W0910 17:31:41.005098    5504 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0910 17:31:41.008720    5504 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 17:31:41.012796    5504 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 17:31:41.015379    5504 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:31:41.017848    5504 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0910 17:31:41.022174    5504 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 17:31:41.022796    5504 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:31:46.082734    5504 out.go:97] Using the hyperv driver based on user configuration
	I0910 17:31:46.082734    5504 start.go:297] selected driver: hyperv
	I0910 17:31:46.082734    5504 start.go:901] validating driver "hyperv" against <nil>
	I0910 17:31:46.083311    5504 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:31:46.130490    5504 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0910 17:31:46.131766    5504 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 17:31:46.132294    5504 cni.go:84] Creating CNI manager for ""
	I0910 17:31:46.132387    5504 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 17:31:46.132592    5504 start.go:340] cluster config:
	{Name:download-only-884300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-884300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:31:46.133871    5504 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:31:46.136817    5504 out.go:97] Downloading VM boot image ...
	I0910 17:31:46.137184    5504 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:31:50.723309    5504 out.go:97] Starting "download-only-884300" primary control-plane node in "download-only-884300" cluster
	I0910 17:31:50.723309    5504 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 17:31:50.772831    5504 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0910 17:31:50.773574    5504 cache.go:56] Caching tarball of preloaded images
	I0910 17:31:50.774411    5504 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 17:31:50.777280    5504 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0910 17:31:50.777371    5504 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0910 17:31:50.851089    5504 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0910 17:31:54.965039    5504 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0910 17:31:54.966669    5504 preload.go:254] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0910 17:31:55.878948    5504 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0910 17:31:55.879410    5504 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-884300\config.json ...
	I0910 17:31:55.880122    5504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-884300\config.json: {Name:mk308fd26118eb02c1356cddbcda782efab5f310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:31:55.881519    5504 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 17:31:55.884094    5504 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-884300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-884300"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.45s)

TestDownloadOnly/v1.20.0/DeleteAll (0.57s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.57s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.71s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-884300
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.71s)

TestDownloadOnly/v1.31.0/json-events (10.26s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-893900 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-893900 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperv: (10.2638481s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (10.26s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.36s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-893900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-893900: exit status 85 (359.9823ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-884300 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:31 UTC |                     |
	|         | -p download-only-884300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:31 UTC | 10 Sep 24 17:31 UTC |
	| delete  | -p download-only-884300        | download-only-884300 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:31 UTC | 10 Sep 24 17:31 UTC |
	| start   | -o=json --download-only        | download-only-893900 | minikube5\jenkins | v1.34.0 | 10 Sep 24 17:31 UTC |                     |
	|         | -p download-only-893900        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:31:59
	Running on machine: minikube5
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:31:59.849833     300 out.go:345] Setting OutFile to fd 744 ...
	I0910 17:31:59.897407     300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:31:59.897407     300 out.go:358] Setting ErrFile to fd 748...
	I0910 17:31:59.897407     300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:31:59.917737     300 out.go:352] Setting JSON to true
	I0910 17:31:59.919625     300 start.go:129] hostinfo: {"hostname":"minikube5","uptime":100783,"bootTime":1725888736,"procs":178,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 17:31:59.920562     300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 17:31:59.926563     300 out.go:97] [download-only-893900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 17:31:59.927686     300 notify.go:220] Checking for updates...
	I0910 17:31:59.929619     300 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 17:31:59.931266     300 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 17:31:59.934454     300 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:31:59.936599     300 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0910 17:31:59.943004     300 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 17:31:59.943004     300 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:32:04.885789     300 out.go:97] Using the hyperv driver based on user configuration
	I0910 17:32:04.887051     300 start.go:297] selected driver: hyperv
	I0910 17:32:04.887215     300 start.go:901] validating driver "hyperv" against <nil>
	I0910 17:32:04.887693     300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:32:04.927603     300 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0910 17:32:04.928599     300 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 17:32:04.928599     300 cni.go:84] Creating CNI manager for ""
	I0910 17:32:04.928599     300 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:32:04.928599     300 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:32:04.928599     300 start.go:340] cluster config:
	{Name:download-only-893900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-893900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:32:04.928599     300 iso.go:125] acquiring lock: {Name:mkb663b978f90cccd76348bddb3ea027f6ac4252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:32:04.932380     300 out.go:97] Starting "download-only-893900" primary control-plane node in "download-only-893900" cluster
	I0910 17:32:04.932380     300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:32:04.975159     300 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 17:32:04.975159     300 cache.go:56] Caching tarball of preloaded images
	I0910 17:32:04.975629     300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:32:04.978733     300 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0910 17:32:04.978733     300 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0910 17:32:05.044255     300 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 17:32:08.142539     300 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0910 17:32:08.143351     300 preload.go:254] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0910 17:32:08.957631     300 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 17:32:08.958159     300 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-893900\config.json ...
	I0910 17:32:08.958704     300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-893900\config.json: {Name:mk9c7625c0dad9377d619d6ded69327229e55268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:32:08.959529     300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:32:08.959906     300 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.31.0/kubectl.exe
	
	
	* The control-plane node download-only-893900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-893900"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.36s)

TestDownloadOnly/v1.31.0/DeleteAll (0.56s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.56s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.55s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-893900
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.55s)

TestBinaryMirror (6.23s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-282900 --alsologtostderr --binary-mirror http://127.0.0.1:62797 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-282900 --alsologtostderr --binary-mirror http://127.0.0.1:62797 --driver=hyperv: (5.6270406s)
helpers_test.go:175: Cleaning up "binary-mirror-282900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-282900
--- PASS: TestBinaryMirror (6.23s)

TestOffline (356.31s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-281800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-281800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (5m11.0692215s)
helpers_test.go:175: Cleaning up "offline-docker-281800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-281800
E0910 20:24:10.978762    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-281800: (45.2397561s)
--- PASS: TestOffline (356.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-218100
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-218100: exit status 85 (230.1156ms)
-- stdout --
	* Profile "addons-218100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-218100"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-218100
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-218100: exit status 85 (217.2163ms)
-- stdout --
	* Profile "addons-218100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-218100"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (411.04s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-218100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-218100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m51.0362171s)
--- PASS: TestAddons/Setup (411.04s)

TestAddons/serial/Volcano (60.98s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 13.0149ms
addons_test.go:905: volcano-admission stabilized in 13.0149ms
addons_test.go:897: volcano-scheduler stabilized in 13.0149ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-wl2sw" [fffa82c6-fdf0-4423-b705-4e950b8cb603] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0058556s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-dvlhc" [ce0a087f-68a7-4572-aa41-95ef5731f1dc] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0095552s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-zl5fh" [b569f8fa-7889-47bd-ae36-d79d64075691] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0066581s
addons_test.go:932: (dbg) Run:  kubectl --context addons-218100 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-218100 create -f testdata\vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-218100 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a15f40ff-aea1-4d24-abef-12899aaf2960] Pending
helpers_test.go:344: "test-job-nginx-0" [a15f40ff-aea1-4d24-abef-12899aaf2960] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a15f40ff-aea1-4d24-abef-12899aaf2960] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 20.0123839s
addons_test.go:968: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable volcano --alsologtostderr -v=1: (23.1901705s)
--- PASS: TestAddons/serial/Volcano (60.98s)

TestAddons/serial/GCPAuth/Namespaces (0.29s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-218100 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-218100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.29s)

TestAddons/parallel/Ingress (59.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-218100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-218100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-218100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fd53a022-7dee-4a6d-ae4b-02d2f2a5edfa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fd53a022-7dee-4a6d-ae4b-02d2f2a5edfa] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0170657s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (8.4567923s)
addons_test.go:288: (dbg) Run:  kubectl --context addons-218100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 ip: (2.1087468s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.31.219.103
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable ingress-dns --alsologtostderr -v=1: (13.5158412s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable ingress --alsologtostderr -v=1: (20.282522s)
--- PASS: TestAddons/parallel/Ingress (59.16s)

TestAddons/parallel/InspektorGadget (24.11s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gdgns" [495c6cbd-c856-4086-bb25-5ad4d6862351] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0163361s
addons_test.go:851: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-218100
addons_test.go:851: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-218100: (19.0908173s)
--- PASS: TestAddons/parallel/InspektorGadget (24.11s)

TestAddons/parallel/MetricsServer (18.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 7.7213ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-8fd2v" [87c58652-e39f-47e5-a59b-098bd940d727] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0171162s
addons_test.go:417: (dbg) Run:  kubectl --context addons-218100 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable metrics-server --alsologtostderr -v=1: (13.5059872s)
--- PASS: TestAddons/parallel/MetricsServer (18.69s)

TestAddons/parallel/HelmTiller (25.20s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 4.0549ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-ktnmd" [0dc5e187-9a58-4a0e-bef0-b7bca50b8188] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.0124547s
addons_test.go:475: (dbg) Run:  kubectl --context addons-218100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-218100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.0523158s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable helm-tiller --alsologtostderr -v=1: (13.1184966s)
--- PASS: TestAddons/parallel/HelmTiller (25.20s)

TestAddons/parallel/CSI (73.76s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.4481ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-218100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-218100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [074dc6dc-6709-4489-afb2-664dc8c435b4] Pending
helpers_test.go:344: "task-pv-pod" [074dc6dc-6709-4489-afb2-664dc8c435b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [074dc6dc-6709-4489-afb2-664dc8c435b4] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.0115963s
addons_test.go:590: (dbg) Run:  kubectl --context addons-218100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-218100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-218100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-218100 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-218100 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-218100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-218100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3dd2da52-ffad-4459-a8bc-a0e9c07e5bc7] Pending
helpers_test.go:344: "task-pv-pod-restore" [3dd2da52-ffad-4459-a8bc-a0e9c07e5bc7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3dd2da52-ffad-4459-a8bc-a0e9c07e5bc7] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0196033s
addons_test.go:632: (dbg) Run:  kubectl --context addons-218100 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-218100 delete pod task-pv-pod-restore: (1.5258725s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-218100 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-218100 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (19.5462909s)
addons_test.go:648: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable volumesnapshots --alsologtostderr -v=1: (14.1192933s)
--- PASS: TestAddons/parallel/CSI (73.76s)
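The repeated helpers_test.go:394 lines above are one polling loop: re-run `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports the expected phase. A minimal shell sketch of that loop, assuming the same `addons-218100` context as this run; the function name and retry parameters are illustrative, not minikube code.

```shell
# Poll a PVC's phase until it matches (default: Bound) or retries run out.
# Mirrors the jsonpath query used by helpers_test.go:394 above.
wait_for_pvc_phase() {
  pvc="$1"; want="${2:-Bound}"; tries="${3:-60}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    phase="$(kubectl --context addons-218100 get pvc "$pvc" \
      -o 'jsonpath={.status.phase}' -n default 2>/dev/null)"
    if [ "$phase" = "$want" ]; then
      return 0
    fi
    sleep 5
    i=$((i + 1))
  done
  return 1
}
```

Used as `wait_for_pvc_phase hpvc` before creating the pod that mounts the claim.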

                                                
                                    
TestAddons/parallel/Headlamp (48.22s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-218100 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-218100 --alsologtostderr -v=1: (14.0033051s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-hdbzv" [c6fb41cb-1074-44ec-a8a2-394369f7f222] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-hdbzv" [c6fb41cb-1074-44ec-a8a2-394369f7f222] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-hdbzv" [c6fb41cb-1074-44ec-a8a2-394369f7f222] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.0176737s
addons_test.go:839: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable headlamp --alsologtostderr -v=1: (19.1984663s)
--- PASS: TestAddons/parallel/Headlamp (48.22s)

                                                
                                    
TestAddons/parallel/CloudSpanner (18.37s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8cq74" [5955f274-0335-4016-a70d-4bf2589b5197] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0063208s
addons_test.go:870: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-218100
addons_test.go:870: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-218100: (13.3519948s)
--- PASS: TestAddons/parallel/CloudSpanner (18.37s)

                                                
                                    
TestAddons/parallel/LocalPath (80.46s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-218100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-218100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cd0f9df1-e24d-45d0-b693-dd98cac42149] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cd0f9df1-e24d-45d0-b693-dd98cac42149] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cd0f9df1-e24d-45d0-b693-dd98cac42149] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0185588s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-218100 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 ssh "cat /opt/local-path-provisioner/pvc-4de6a817-3f84-46d6-85bd-410059542afd_default_test-pvc/file1"
addons_test.go:1009: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 ssh "cat /opt/local-path-provisioner/pvc-4de6a817-3f84-46d6-85bd-410059542afd_default_test-pvc/file1": (8.54021s)
addons_test.go:1021: (dbg) Run:  kubectl --context addons-218100 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-218100 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (58.4920182s)
--- PASS: TestAddons/parallel/LocalPath (80.46s)
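The `ssh "cat ..."` step above reads the test file through the host directory that local-path-provisioner backs the PV with, named `<pv-name>_<namespace>_<pvc-name>` under `/opt/local-path-provisioner` (visible verbatim in the addons_test.go:1009 command). A hedged sketch of composing that path from the bound PVC, assuming the same context; the helper is illustrative, not part of the test suite.

```shell
# Build the local-path-provisioner host path for a bound PVC:
# /opt/local-path-provisioner/<pv>_<ns>_<pvc>, where <pv> comes from
# the claim's spec.volumeName.
local_path_for_pvc() {
  ns="$1"; pvc="$2"
  pv="$(kubectl --context addons-218100 -n "$ns" get pvc "$pvc" \
    -o 'jsonpath={.spec.volumeName}')"
  printf '/opt/local-path-provisioner/%s_%s_%s' "$pv" "$ns" "$pvc"
}
```

The result can then be read on the node, e.g. `minikube -p addons-218100 ssh "cat $(local_path_for_pvc default test-pvc)/file1"`.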

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (18.83s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zpcgf" [b805867a-0e07-4d26-b1f5-ec0f6d63bc5e] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0082112s
addons_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-218100
addons_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-218100: (13.8201457s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (18.83s)

                                                
                                    
TestAddons/parallel/Yakd (25.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5q9nv" [e28d2710-3544-4f47-9323-a24c31623505] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0169534s
addons_test.go:1076: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-218100 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-windows-amd64.exe -p addons-218100 addons disable yakd --alsologtostderr -v=1: (19.2668335s)
--- PASS: TestAddons/parallel/Yakd (25.29s)

                                                
                                    
TestAddons/StoppedEnableDisable (49.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-218100
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-218100: (38.324743s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-218100
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-218100: (4.4129123s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-218100
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-218100: (4.152226s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-218100
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-218100: (2.6637897s)
--- PASS: TestAddons/StoppedEnableDisable (49.55s)

                                                
                                    
TestErrorSpam/start (15.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 start --dry-run: (4.9940196s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 start --dry-run: (5.236305s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 start --dry-run: (5.1201439s)
--- PASS: TestErrorSpam/start (15.35s)

                                                
                                    
TestErrorSpam/status (32.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 status: (11.2445544s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 status: (10.7174841s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 status: (10.7376971s)
--- PASS: TestErrorSpam/status (32.70s)

                                                
                                    
TestErrorSpam/pause (20.26s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 pause: (6.8775603s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 pause
E0910 17:56:54.594555    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 pause: (6.6853427s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 pause: (6.6926327s)
--- PASS: TestErrorSpam/pause (20.26s)

                                                
                                    
TestErrorSpam/unpause (20.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 unpause: (6.9761601s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 unpause: (6.7835059s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 unpause: (6.782419s)
--- PASS: TestErrorSpam/unpause (20.54s)

                                                
                                    
TestErrorSpam/stop (51.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 stop: (31.6568035s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 stop: (10.1881577s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-885900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-885900 stop: (9.5734792s)
--- PASS: TestErrorSpam/stop (51.42s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4724\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (215.33s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-879800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0910 17:59:10.404512    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 17:59:38.475823    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-879800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m35.3177048s)
--- PASS: TestFunctional/serial/StartWithProxy (215.33s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (142.92s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-879800 --alsologtostderr -v=8
E0910 18:04:10.418125    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-879800 --alsologtostderr -v=8: (2m22.916267s)
functional_test.go:663: soft start took 2m22.9182656s for "functional-879800" cluster.
--- PASS: TestFunctional/serial/SoftStart (142.92s)

                                                
                                    
TestFunctional/serial/KubeContext (0.11s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.11s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.19s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-879800 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (24.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 cache add registry.k8s.io/pause:3.1: (8.2036815s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 cache add registry.k8s.io/pause:3.3: (8.1171686s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 cache add registry.k8s.io/pause:latest: (7.8300663s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (24.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (9.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-879800 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4020939783\001
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-879800 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4020939783\001: (1.6869198s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 cache add minikube-local-cache-test:functional-879800
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 cache add minikube-local-cache-test:functional-879800: (7.4345115s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 cache delete minikube-local-cache-test:functional-879800
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-879800
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh sudo crictl images
functional_test.go:1124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh sudo crictl images: (8.445546s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (32.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.3854795s)
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.335052s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 cache reload: (7.2930032s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.234473s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (32.25s)
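The cache_reload steps above form a check-and-reload cycle: `crictl inspecti` exits non-zero when an image is absent from the node runtime (the exit status 1 shown above), and `minikube cache reload` re-pushes the cached images, after which the same inspect succeeds. A sketch of that cycle as one helper, using this run's profile name; the `MINIKUBE` variable and the wrapper itself are illustrative assumptions.

```shell
# Reload minikube's image cache only if the given image is missing from
# the node's container runtime. Presence is probed exactly as the test
# does: via `crictl inspecti`'s exit status over `minikube ssh`.
MINIKUBE="${MINIKUBE:-out/minikube-windows-amd64.exe}"

ensure_cached_image() {
  img="$1"
  if "$MINIKUBE" -p functional-879800 ssh \
      "sudo crictl inspecti $img" >/dev/null 2>&1; then
    return 0  # image already present in the node runtime
  fi
  "$MINIKUBE" -p functional-879800 cache reload
}
```

Called as `ensure_cached_image registry.k8s.io/pause:latest`, it is a no-op when the image is already loaded.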

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.44s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.41s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 kubectl -- --context functional-879800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)

                                                
                                    
TestFunctional/serial/LogsCmd (109.75s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs
E0910 18:14:10.462685    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs: (1m49.7525805s)
--- PASS: TestFunctional/serial/LogsCmd (109.75s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (180.68s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2246304708\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2246304708\001\logs.txt: (3m0.6759527s)
--- PASS: TestFunctional/serial/LogsFileCmd (180.68s)

TestFunctional/parallel/ConfigCmd (1.57s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 config get cpus: exit status 14 (221.9431ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 config get cpus: exit status 14 (228.994ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.57s)

TestFunctional/parallel/AddonsCmd (0.52s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.52s)

TestFunctional/parallel/SSHCmd (17.39s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "echo hello"
functional_test.go:1725: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh "echo hello": (8.6430018s)
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh "cat /etc/hostname": (8.7427598s)
--- PASS: TestFunctional/parallel/SSHCmd (17.39s)

TestFunctional/parallel/CpCmd (50.72s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.9281089s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh -n functional-879800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh -n functional-879800 "sudo cat /home/docker/cp-test.txt": (9.305436s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 cp functional-879800:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1785398597\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 cp functional-879800:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1785398597\001\cp-test.txt: (8.9004956s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh -n functional-879800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh -n functional-879800 "sudo cat /home/docker/cp-test.txt": (8.6308345s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (6.7822837s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh -n functional-879800 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh -n functional-879800 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.1689118s)
--- PASS: TestFunctional/parallel/CpCmd (50.72s)

TestFunctional/parallel/FileSync (8.25s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/4724/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/test/nested/copy/4724/hosts"
functional_test.go:1931: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/test/nested/copy/4724/hosts": (8.2533538s)
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (8.25s)

TestFunctional/parallel/CertSync (49.55s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/4724.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/ssl/certs/4724.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/ssl/certs/4724.pem": (8.1031883s)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/4724.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /usr/share/ca-certificates/4724.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /usr/share/ca-certificates/4724.pem": (8.3294111s)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/ssl/certs/51391683.0": (8.3500397s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/47242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/ssl/certs/47242.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/ssl/certs/47242.pem": (8.2148158s)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/47242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /usr/share/ca-certificates/47242.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /usr/share/ca-certificates/47242.pem": (8.3603782s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (8.1925425s)
--- PASS: TestFunctional/parallel/CertSync (49.55s)

TestFunctional/parallel/NonActiveRuntimeDisabled (8.71s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-879800 ssh "sudo systemctl is-active crio": exit status 1 (8.7138073s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (8.71s)

TestFunctional/parallel/License (2s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (1.9891768s)
--- PASS: TestFunctional/parallel/License (2.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (10.25s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.8951666s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.25s)

TestFunctional/parallel/ProfileCmd/profile_list (9.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (9.3129394s)
functional_test.go:1315: Took "9.3132343s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "209.4684ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (9.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (9.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (8.9297555s)
functional_test.go:1366: Took "8.9299376s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "211.0178ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (9.14s)

TestFunctional/parallel/Version/short (0.21s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 version --short
--- PASS: TestFunctional/parallel/Version/short (0.21s)

TestFunctional/parallel/Version/components (6.74s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 version -o=json --components: (6.7260259s)
--- PASS: TestFunctional/parallel/Version/components (6.74s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-879800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9556: OpenProcess: The parameter is incorrect.
helpers_test.go:502: unable to terminate pid 10416: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ImageCommands/Setup (2s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.9072785s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-879800
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 update-context --alsologtostderr -v=2: (2.1157373s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 update-context --alsologtostderr -v=2: (2.1567539s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 update-context --alsologtostderr -v=2: (2.1247979s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (120.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image rm kicbase/echo-server:functional-879800 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image rm kicbase/echo-server:functional-879800 --alsologtostderr: (1m0.1491724s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image ls
E0910 18:27:13.986231    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image ls: (1m0.0964005s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (120.25s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-879800
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-879800 image save --daemon kicbase/echo-server:functional-879800 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-879800 image save --daemon kicbase/echo-server:functional-879800 --alsologtostderr: (59.9849636s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-879800
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60.18s)

TestFunctional/delete_echo-server_images (0.18s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-879800
--- PASS: TestFunctional/delete_echo-server_images (0.18s)

TestFunctional/delete_my-image_image (0.08s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-879800
--- PASS: TestFunctional/delete_my-image_image (0.08s)

TestFunctional/delete_minikube_cached_images (0.07s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-879800
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestMultiControlPlane/serial/StartCluster (649.62s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-301400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0910 18:32:59.627737    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:32:59.658752    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:32:59.689628    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:32:59.736809    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:32:59.798790    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:32:59.908075    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:33:00.096506    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:33:00.444069    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:33:01.106293    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:33:02.407031    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:33:04.992714    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:33:10.135370    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:33:20.404592    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:33:40.912642    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:34:10.539357    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:34:21.894356    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:35:43.841902    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:37:59.648668    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:38:27.721326    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:39:10.553006    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:42:59.662442    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-301400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m17.4784635s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: (32.139724s)
--- PASS: TestMultiControlPlane/serial/StartCluster (649.62s)

TestMultiControlPlane/serial/DeployApp (12.44s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-301400 -- rollout status deployment/busybox: (4.8135313s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-d2tcx -- nslookup kubernetes.io
E0910 18:43:54.083489    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-d2tcx -- nslookup kubernetes.io: (1.7158936s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-lnwzg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-lnwzg -- nslookup kubernetes.io: (1.5153453s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-wbkmw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-d2tcx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-lnwzg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-wbkmw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-d2tcx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-lnwzg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-301400 -- exec busybox-7dff88458-wbkmw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.44s)
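The DeployApp subtest above resolves both external and in-cluster names from every busybox replica. A minimal sketch of that loop, assuming the deployment labels its pods `app=busybox` (the log only shows pod names, so the selector is our assumption); it degrades to a no-op when kubectl or the `ha-301400` context is unavailable:

```shell
# Sketch of the per-pod DNS check exercised by TestMultiControlPlane/serial/DeployApp.
# Assumption: busybox pods carry the label app=busybox.
check_dns() {
  ctx=$1
  # Bail out harmlessly when no cluster is reachable from this machine.
  command -v kubectl >/dev/null 2>&1 || { echo "kubectl not on PATH; skipping"; return 0; }
  kubectl config get-contexts "$ctx" >/dev/null 2>&1 || { echo "context $ctx not found; skipping"; return 0; }
  for pod in $(kubectl --context "$ctx" get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
    # Same lookup the test performs via `minikube kubectl -- exec ... nslookup`.
    kubectl --context "$ctx" exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done
}
check_dns ha-301400
```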

TestMultiControlPlane/serial/AddWorkerNode (235.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-301400 -v=7 --alsologtostderr
E0910 18:47:59.681471    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-301400 -v=7 --alsologtostderr: (3m13.3792966s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: (42.5073792s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (235.89s)
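The AddWorkerNode steps above reduce to two commands: `minikube node add` joins a new worker to the profile, and `status` then reports it. A guarded sketch (a no-op on machines without minikube; the real run above took over three minutes on Hyper-V):

```shell
# Sketch of the TestMultiControlPlane/serial/AddWorkerNode flow.
add_worker() {
  profile=$1
  command -v minikube >/dev/null 2>&1 || { echo "minikube not on PATH; skipping"; return 0; }
  # Same flags the test passes for verbose diagnostics.
  minikube node add -p "$profile" -v=7 --alsologtostderr &&
    minikube -p "$profile" status -v=7 --alsologtostderr
}
add_worker ha-301400
```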

TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-301400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (24.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0910 18:49:10.593815    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 18:49:23.151703    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (24.8329683s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (24.83s)

TestMultiControlPlane/serial/CopyFile (554.14s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 status --output json -v=7 --alsologtostderr: (42.4805498s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp testdata\cp-test.txt ha-301400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp testdata\cp-test.txt ha-301400:/home/docker/cp-test.txt: (8.5155378s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt": (8.5000229s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2953471519\001\cp-test_ha-301400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2953471519\001\cp-test_ha-301400.txt: (8.4282354s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt": (8.4532876s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400:/home/docker/cp-test.txt ha-301400-m02:/home/docker/cp-test_ha-301400_ha-301400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400:/home/docker/cp-test.txt ha-301400-m02:/home/docker/cp-test_ha-301400_ha-301400-m02.txt: (14.6871679s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt": (8.346453s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test_ha-301400_ha-301400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test_ha-301400_ha-301400-m02.txt": (8.4439455s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400:/home/docker/cp-test.txt ha-301400-m03:/home/docker/cp-test_ha-301400_ha-301400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400:/home/docker/cp-test.txt ha-301400-m03:/home/docker/cp-test_ha-301400_ha-301400-m03.txt: (14.828705s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt": (8.5006214s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test_ha-301400_ha-301400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test_ha-301400_ha-301400-m03.txt": (8.4229011s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400:/home/docker/cp-test.txt ha-301400-m04:/home/docker/cp-test_ha-301400_ha-301400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400:/home/docker/cp-test.txt ha-301400-m04:/home/docker/cp-test_ha-301400_ha-301400-m04.txt: (14.5992112s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test.txt": (8.3543819s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test_ha-301400_ha-301400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test_ha-301400_ha-301400-m04.txt": (8.3019411s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp testdata\cp-test.txt ha-301400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp testdata\cp-test.txt ha-301400-m02:/home/docker/cp-test.txt: (8.3613739s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt": (8.4172199s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2953471519\001\cp-test_ha-301400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2953471519\001\cp-test_ha-301400-m02.txt: (8.3828636s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt": (8.377742s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m02:/home/docker/cp-test.txt ha-301400:/home/docker/cp-test_ha-301400-m02_ha-301400.txt
E0910 18:52:59.699499    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m02:/home/docker/cp-test.txt ha-301400:/home/docker/cp-test_ha-301400-m02_ha-301400.txt: (14.8203725s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt": (8.3721372s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test_ha-301400-m02_ha-301400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test_ha-301400-m02_ha-301400.txt": (8.3512802s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m02:/home/docker/cp-test.txt ha-301400-m03:/home/docker/cp-test_ha-301400-m02_ha-301400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m02:/home/docker/cp-test.txt ha-301400-m03:/home/docker/cp-test_ha-301400-m02_ha-301400-m03.txt: (14.7005684s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt": (8.37647s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test_ha-301400-m02_ha-301400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test_ha-301400-m02_ha-301400-m03.txt": (8.402686s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m02:/home/docker/cp-test.txt ha-301400-m04:/home/docker/cp-test_ha-301400-m02_ha-301400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m02:/home/docker/cp-test.txt ha-301400-m04:/home/docker/cp-test_ha-301400-m02_ha-301400-m04.txt: (14.6619942s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt"
E0910 18:54:10.613761    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test.txt": (8.431333s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test_ha-301400-m02_ha-301400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test_ha-301400-m02_ha-301400-m04.txt": (8.3814941s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp testdata\cp-test.txt ha-301400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp testdata\cp-test.txt ha-301400-m03:/home/docker/cp-test.txt: (8.3553677s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt": (8.3100054s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2953471519\001\cp-test_ha-301400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2953471519\001\cp-test_ha-301400-m03.txt: (8.3489493s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt": (8.3619103s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt ha-301400:/home/docker/cp-test_ha-301400-m03_ha-301400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt ha-301400:/home/docker/cp-test_ha-301400-m03_ha-301400.txt: (14.6833026s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt": (8.4408753s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test_ha-301400-m03_ha-301400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test_ha-301400-m03_ha-301400.txt": (8.4033921s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt ha-301400-m02:/home/docker/cp-test_ha-301400-m03_ha-301400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt ha-301400-m02:/home/docker/cp-test_ha-301400-m03_ha-301400-m02.txt: (14.5229979s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt": (8.4044836s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test_ha-301400-m03_ha-301400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test_ha-301400-m03_ha-301400-m02.txt": (8.3325212s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt ha-301400-m04:/home/docker/cp-test_ha-301400-m03_ha-301400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m03:/home/docker/cp-test.txt ha-301400-m04:/home/docker/cp-test_ha-301400-m03_ha-301400-m04.txt: (14.8733631s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test.txt": (8.3627661s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test_ha-301400-m03_ha-301400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test_ha-301400-m03_ha-301400-m04.txt": (8.4500167s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp testdata\cp-test.txt ha-301400-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp testdata\cp-test.txt ha-301400-m04:/home/docker/cp-test.txt: (8.3499936s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt": (8.4130948s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2953471519\001\cp-test_ha-301400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2953471519\001\cp-test_ha-301400-m04.txt: (8.3627231s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt": (8.3646894s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt ha-301400:/home/docker/cp-test_ha-301400-m04_ha-301400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt ha-301400:/home/docker/cp-test_ha-301400-m04_ha-301400.txt: (14.5354963s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt": (8.2841814s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test_ha-301400-m04_ha-301400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400 "sudo cat /home/docker/cp-test_ha-301400-m04_ha-301400.txt": (8.4223124s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt ha-301400-m02:/home/docker/cp-test_ha-301400-m04_ha-301400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt ha-301400-m02:/home/docker/cp-test_ha-301400-m04_ha-301400-m02.txt: (14.6460797s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt": (8.3191375s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test_ha-301400-m04_ha-301400-m02.txt"
E0910 18:57:59.721921    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m02 "sudo cat /home/docker/cp-test_ha-301400-m04_ha-301400-m02.txt": (8.3803946s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt ha-301400-m03:/home/docker/cp-test_ha-301400-m04_ha-301400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 cp ha-301400-m04:/home/docker/cp-test.txt ha-301400-m03:/home/docker/cp-test_ha-301400-m04_ha-301400-m03.txt: (14.614387s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m04 "sudo cat /home/docker/cp-test.txt": (8.3348092s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test_ha-301400-m04_ha-301400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 ssh -n ha-301400-m03 "sudo cat /home/docker/cp-test_ha-301400-m04_ha-301400-m03.txt": (8.3281937s)
--- PASS: TestMultiControlPlane/serial/CopyFile (554.14s)
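Each CopyFile iteration above is the same round-trip: push a file to a node with `minikube cp`, then read it back over `minikube ssh` and compare. A minimal sketch of one iteration, with profile/node names mirroring the log (no-op without minikube on PATH):

```shell
# Sketch of one TestMultiControlPlane/serial/CopyFile round-trip.
cp_roundtrip() {
  profile=$1 node=$2 src=$3
  command -v minikube >/dev/null 2>&1 || { echo "minikube not on PATH; skipping"; return 0; }
  # Copy into the node, as in `minikube -p ha-301400 cp testdata\cp-test.txt ...` above.
  minikube -p "$profile" cp "$src" "$node:/home/docker/cp-test.txt" || return 1
  # Read it back the way the test does, then verify the contents match.
  remote=$(minikube -p "$profile" ssh -n "$node" "sudo cat /home/docker/cp-test.txt")
  [ "$remote" = "$(cat "$src")" ]
}
cp_roundtrip ha-301400 ha-301400-m02 testdata/cp-test.txt
```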

TestMultiControlPlane/serial/StopSecondaryNode (67.01s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 node stop m02 -v=7 --alsologtostderr
E0910 18:59:10.643254    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-301400 node stop m02 -v=7 --alsologtostderr: (32.963177s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-301400 status -v=7 --alsologtostderr: exit status 7 (34.0454027s)

-- stdout --
	ha-301400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-301400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-301400-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-301400-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0910 18:59:12.312558    8400 out.go:345] Setting OutFile to fd 1632 ...
	I0910 18:59:12.364519    8400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:59:12.364519    8400 out.go:358] Setting ErrFile to fd 1628...
	I0910 18:59:12.364519    8400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:59:12.383630    8400 out.go:352] Setting JSON to false
	I0910 18:59:12.383630    8400 mustload.go:65] Loading cluster: ha-301400
	I0910 18:59:12.383630    8400 notify.go:220] Checking for updates...
	I0910 18:59:12.384710    8400 config.go:182] Loaded profile config "ha-301400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:59:12.384790    8400 status.go:255] checking status of ha-301400 ...
	I0910 18:59:12.385053    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:59:14.351137    8400 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:59:14.351137    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:14.351137    8400 status.go:330] ha-301400 host status = "Running" (err=<nil>)
	I0910 18:59:14.351137    8400 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:59:14.351747    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:59:16.287174    8400 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:59:16.287408    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:16.287408    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:59:18.601228    8400 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:59:18.601228    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:18.601708    8400 host.go:66] Checking if "ha-301400" exists ...
	I0910 18:59:18.611109    8400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:59:18.611109    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400 ).state
	I0910 18:59:20.524047    8400 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:59:20.524996    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:20.524996    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400 ).networkadapters[0]).ipaddresses[0]
	I0910 18:59:22.823090    8400 main.go:141] libmachine: [stdout =====>] : 172.31.216.168
	
	I0910 18:59:22.823090    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:22.823090    8400 sshutil.go:53] new ssh client: &{IP:172.31.216.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400\id_rsa Username:docker}
	I0910 18:59:22.932687    8400 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3212865s)
	I0910 18:59:22.948964    8400 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:22.969370    8400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:59:22.996242    8400 kubeconfig.go:125] found "ha-301400" server: "https://172.31.223.254:8443"
	I0910 18:59:22.996242    8400 api_server.go:166] Checking apiserver status ...
	I0910 18:59:23.006120    8400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:23.039049    8400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2230/cgroup
	W0910 18:59:23.056857    8400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2230/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:23.065628    8400 ssh_runner.go:195] Run: ls
	I0910 18:59:23.072505    8400 api_server.go:253] Checking apiserver healthz at https://172.31.223.254:8443/healthz ...
	I0910 18:59:23.086425    8400 api_server.go:279] https://172.31.223.254:8443/healthz returned 200:
	ok
	I0910 18:59:23.086425    8400 status.go:422] ha-301400 apiserver status = Running (err=<nil>)
	I0910 18:59:23.086425    8400 status.go:257] ha-301400 status: &{Name:ha-301400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:59:23.087089    8400 status.go:255] checking status of ha-301400-m02 ...
	I0910 18:59:23.087426    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m02 ).state
	I0910 18:59:25.004600    8400 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 18:59:25.004676    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:25.004676    8400 status.go:330] ha-301400-m02 host status = "Stopped" (err=<nil>)
	I0910 18:59:25.004676    8400 status.go:343] host is not running, skipping remaining checks
	I0910 18:59:25.004676    8400 status.go:257] ha-301400-m02 status: &{Name:ha-301400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:59:25.004791    8400 status.go:255] checking status of ha-301400-m03 ...
	I0910 18:59:25.005589    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:59:26.960380    8400 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:59:26.960453    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:26.960453    8400 status.go:330] ha-301400-m03 host status = "Running" (err=<nil>)
	I0910 18:59:26.960453    8400 host.go:66] Checking if "ha-301400-m03" exists ...
	I0910 18:59:26.961196    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:59:28.875779    8400 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:59:28.875779    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:28.875894    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:59:31.173738    8400 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:59:31.173969    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:31.173969    8400 host.go:66] Checking if "ha-301400-m03" exists ...
	I0910 18:59:31.182971    8400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:59:31.182971    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m03 ).state
	I0910 18:59:33.177502    8400 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:59:33.177559    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:33.177559    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m03 ).networkadapters[0]).ipaddresses[0]
	I0910 18:59:35.486280    8400 main.go:141] libmachine: [stdout =====>] : 172.31.217.146
	
	I0910 18:59:35.486653    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:35.486706    8400 sshutil.go:53] new ssh client: &{IP:172.31.217.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m03\id_rsa Username:docker}
	I0910 18:59:35.583925    8400 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4006579s)
	I0910 18:59:35.593100    8400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:59:35.618157    8400 kubeconfig.go:125] found "ha-301400" server: "https://172.31.223.254:8443"
	I0910 18:59:35.618157    8400 api_server.go:166] Checking apiserver status ...
	I0910 18:59:35.626814    8400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:35.665357    8400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2164/cgroup
	W0910 18:59:35.685404    8400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:35.694842    8400 ssh_runner.go:195] Run: ls
	I0910 18:59:35.701822    8400 api_server.go:253] Checking apiserver healthz at https://172.31.223.254:8443/healthz ...
	I0910 18:59:35.709835    8400 api_server.go:279] https://172.31.223.254:8443/healthz returned 200:
	ok
	I0910 18:59:35.710230    8400 status.go:422] ha-301400-m03 apiserver status = Running (err=<nil>)
	I0910 18:59:35.710267    8400 status.go:257] ha-301400-m03 status: &{Name:ha-301400-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:59:35.710267    8400 status.go:255] checking status of ha-301400-m04 ...
	I0910 18:59:35.710604    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m04 ).state
	I0910 18:59:37.639248    8400 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:59:37.639248    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:37.639248    8400 status.go:330] ha-301400-m04 host status = "Running" (err=<nil>)
	I0910 18:59:37.639248    8400 host.go:66] Checking if "ha-301400-m04" exists ...
	I0910 18:59:37.640030    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m04 ).state
	I0910 18:59:39.568923    8400 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:59:39.568923    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:39.568923    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m04 ).networkadapters[0]).ipaddresses[0]
	I0910 18:59:41.871136    8400 main.go:141] libmachine: [stdout =====>] : 172.31.215.214
	
	I0910 18:59:41.872235    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:41.872235    8400 host.go:66] Checking if "ha-301400-m04" exists ...
	I0910 18:59:41.881765    8400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:59:41.881765    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-301400-m04 ).state
	I0910 18:59:43.799536    8400 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 18:59:43.799849    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:43.799849    8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-301400-m04 ).networkadapters[0]).ipaddresses[0]
	I0910 18:59:46.110097    8400 main.go:141] libmachine: [stdout =====>] : 172.31.215.214
	
	I0910 18:59:46.110097    8400 main.go:141] libmachine: [stderr =====>] : 
	I0910 18:59:46.110097    8400 sshutil.go:53] new ssh client: &{IP:172.31.215.214 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-301400-m04\id_rsa Username:docker}
	I0910 18:59:46.206081    8400 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3240241s)
	I0910 18:59:46.214508    8400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:59:46.236894    8400 status.go:257] ha-301400-m04 status: &{Name:ha-301400-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (67.01s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (18.99s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (18.9897924s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (18.99s)

TestImageBuild/serial/Setup (176.68s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-517100 --driver=hyperv
E0910 19:07:59.766332    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:09:10.673361    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-517100 --driver=hyperv: (2m56.6787609s)
--- PASS: TestImageBuild/serial/Setup (176.68s)

TestImageBuild/serial/NormalBuild (9.35s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-517100
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-517100: (9.3442906s)
--- PASS: TestImageBuild/serial/NormalBuild (9.35s)

TestImageBuild/serial/BuildWithBuildArg (7.95s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-517100
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-517100: (7.9524813s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (7.95s)

TestImageBuild/serial/BuildWithDockerIgnore (7.51s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-517100
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-517100: (7.5080751s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.51s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.43s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-517100
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-517100: (7.4274588s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.43s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (68.42s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-867500 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-867500 --output=json --user=testUser: (1m8.4106414s)
--- PASS: TestJSONOutput/stop/Command (68.42s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.8s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-754000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-754000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (222.3613ms)

-- stdout --
	{"specversion":"1.0","id":"7b6befa8-b3ca-4381-83ef-41f3eadefd84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-754000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3cd58cef-bf62-4a84-870a-bb5987f2fc69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"8b287175-8cdf-45a0-9bf8-1a454cc30852","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"db569c05-a3c0-49e2-a5a3-5b11fc0d457b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"d0cce9b1-689a-4962-a872-e32b5e9d19f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"116980ce-b1c2-4afd-a0c9-57a1d1cc61de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"283fb9bf-1222-493f-a5fe-58dd3c8147b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-754000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-754000
--- PASS: TestErrorJSONOutput (0.80s)

TestMainNoArgs (0.2s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.20s)

TestMinikubeProfile (469.81s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-085100 --driver=hyperv
E0910 19:17:14.256665    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:17:59.796736    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:19:10.718455    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-085100 --driver=hyperv: (2m55.3350993s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-085100 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-085100 --driver=hyperv: (2m58.2631395s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-085100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0910 19:22:43.322863    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.0947878s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-085100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0910 19:22:59.825328    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (18.8973453s)
helpers_test.go:175: Cleaning up "second-085100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-085100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-085100: (38.9085753s)
helpers_test.go:175: Cleaning up "first-085100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-085100
E0910 19:24:10.746954    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-085100: (38.7853847s)
--- PASS: TestMinikubeProfile (469.81s)

TestMountStart/serial/StartWithMountFirst (140.42s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-038400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-038400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m19.4147257s)
--- PASS: TestMountStart/serial/StartWithMountFirst (140.42s)

TestMountStart/serial/VerifyMountFirst (8.49s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-038400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-038400 ssh -- ls /minikube-host: (8.4869817s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.49s)

TestMountStart/serial/StartWithMountSecond (139.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-038400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0910 19:27:59.839758    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:29:10.766272    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-038400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m18.729889s)
--- PASS: TestMountStart/serial/StartWithMountSecond (139.73s)

TestMountStart/serial/VerifyMountSecond (8.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-038400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-038400 ssh -- ls /minikube-host: (8.4072687s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.41s)

TestMountStart/serial/DeleteFirst (25.09s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-038400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-038400 --alsologtostderr -v=5: (25.0940994s)
--- PASS: TestMountStart/serial/DeleteFirst (25.09s)

TestMountStart/serial/VerifyMountPostDelete (8.51s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-038400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-038400 ssh -- ls /minikube-host: (8.5103184s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.51s)

TestMountStart/serial/Stop (28.39s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-038400
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-038400: (28.3908665s)
--- PASS: TestMountStart/serial/Stop (28.39s)

TestMountStart/serial/RestartStopped (107.23s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-038400
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-038400: (1m46.2198806s)
--- PASS: TestMountStart/serial/RestartStopped (107.23s)

TestMountStart/serial/VerifyMountPostStop (8.52s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-038400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-038400 ssh -- ls /minikube-host: (8.5216779s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (8.52s)

TestMultiNode/serial/FreshStart2Nodes (395.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-629100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0910 19:32:59.858633    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:33:54.340432    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:34:10.777762    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:37:59.882933    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-629100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m14.1750787s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 status --alsologtostderr
E0910 19:39:10.806021    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 19:39:23.406261    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 status --alsologtostderr: (20.9829409s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (395.16s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- rollout status deployment/busybox: (4.2871549s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-7c4qt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-7c4qt -- nslookup kubernetes.io: (1.702454s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-lzs87 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-7c4qt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-lzs87 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-7c4qt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-629100 -- exec busybox-7dff88458-lzs87 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.24s)
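The DeployApp2Nodes steps above resolve the same three DNS names from each busybox pod. A minimal runnable sketch of that loop, where `kubectl` is a hypothetical stub that only echoes the command it would run (the real test shells out through `out/minikube-windows-amd64.exe kubectl -p multinode-629100 --`):

```shell
# Stub standing in for the real kubectl invocation; echoes instead of executing.
kubectl() { echo "kubectl $*"; }

pods="busybox-7dff88458-7c4qt busybox-7dff88458-lzs87"
names="kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local"
# Run nslookup for every (pod, name) pair, as the test log shows.
out=$(for pod in $pods; do
  for name in $names; do
    kubectl exec "$pod" -- nslookup "$name"
  done
done)
printf '%s\n' "$out"
```

With two pods and three names this emits six `kubectl exec ... nslookup ...` commands, matching the six exec runs in the log.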

                                                
                                    
TestMultiNode/serial/AddNode (210.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-629100 -v 3 --alsologtostderr
E0910 19:42:59.911318    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-629100 -v 3 --alsologtostderr: (3m0.0362705s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 status --alsologtostderr: (29.9916705s)
--- PASS: TestMultiNode/serial/AddNode (210.03s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-629100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.15s)

                                                
                                    
TestMultiNode/serial/ProfileList (10.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.2976067s)
--- PASS: TestMultiNode/serial/ProfileList (10.31s)

                                                
                                    
TestMultiNode/serial/CopyFile (311.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 status --output json --alsologtostderr
E0910 19:44:10.816624    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 status --output json --alsologtostderr: (30.3024139s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp testdata\cp-test.txt multinode-629100:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp testdata\cp-test.txt multinode-629100:/home/docker/cp-test.txt: (8.0391893s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test.txt": (7.9285429s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100.txt: (7.8817246s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test.txt": (8.0427556s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100:/home/docker/cp-test.txt multinode-629100-m02:/home/docker/cp-test_multinode-629100_multinode-629100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100:/home/docker/cp-test.txt multinode-629100-m02:/home/docker/cp-test_multinode-629100_multinode-629100-m02.txt: (14.1072455s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test.txt": (8.016322s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test_multinode-629100_multinode-629100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test_multinode-629100_multinode-629100-m02.txt": (8.0246502s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100:/home/docker/cp-test.txt multinode-629100-m03:/home/docker/cp-test_multinode-629100_multinode-629100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100:/home/docker/cp-test.txt multinode-629100-m03:/home/docker/cp-test_multinode-629100_multinode-629100-m03.txt: (14.1274216s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test.txt": (8.1510138s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test_multinode-629100_multinode-629100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test_multinode-629100_multinode-629100-m03.txt": (8.3275539s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp testdata\cp-test.txt multinode-629100-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp testdata\cp-test.txt multinode-629100-m02:/home/docker/cp-test.txt: (8.0707332s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test.txt": (8.1839304s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100-m02.txt: (8.1836539s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test.txt": (8.1609766s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt multinode-629100:/home/docker/cp-test_multinode-629100-m02_multinode-629100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt multinode-629100:/home/docker/cp-test_multinode-629100-m02_multinode-629100.txt: (14.2252951s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test.txt": (8.1725286s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test_multinode-629100-m02_multinode-629100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test_multinode-629100-m02_multinode-629100.txt": (8.2313147s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt multinode-629100-m03:/home/docker/cp-test_multinode-629100-m02_multinode-629100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m02:/home/docker/cp-test.txt multinode-629100-m03:/home/docker/cp-test_multinode-629100-m02_multinode-629100-m03.txt: (14.265772s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test.txt": (8.1449631s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test_multinode-629100-m02_multinode-629100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test_multinode-629100-m02_multinode-629100-m03.txt": (8.2644865s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp testdata\cp-test.txt multinode-629100-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp testdata\cp-test.txt multinode-629100-m03:/home/docker/cp-test.txt: (8.2422359s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test.txt"
E0910 19:47:59.928471    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test.txt": (8.4479897s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1125635233\001\cp-test_multinode-629100-m03.txt: (8.1275968s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test.txt": (8.1650377s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt multinode-629100:/home/docker/cp-test_multinode-629100-m03_multinode-629100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt multinode-629100:/home/docker/cp-test_multinode-629100-m03_multinode-629100.txt: (14.4446285s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test.txt": (8.1571806s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test_multinode-629100-m03_multinode-629100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100 "sudo cat /home/docker/cp-test_multinode-629100-m03_multinode-629100.txt": (8.1450566s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt multinode-629100-m02:/home/docker/cp-test_multinode-629100-m03_multinode-629100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 cp multinode-629100-m03:/home/docker/cp-test.txt multinode-629100-m02:/home/docker/cp-test_multinode-629100-m03_multinode-629100-m02.txt: (14.241665s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test.txt"
E0910 19:49:10.844982    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m03 "sudo cat /home/docker/cp-test.txt": (8.2397792s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test_multinode-629100-m03_multinode-629100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 ssh -n multinode-629100-m02 "sudo cat /home/docker/cp-test_multinode-629100-m03_multinode-629100-m02.txt": (8.2001312s)
--- PASS: TestMultiNode/serial/CopyFile (311.33s)
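Every CopyFile step above follows the same round trip: `cp` a file to a node, then `ssh` in and `cat` it back to verify. A sketch of that pattern over the three nodes, with `minikube` as a hypothetical echoing stub for `out/minikube-windows-amd64.exe`:

```shell
# Stub standing in for out/minikube-windows-amd64.exe; echoes instead of executing.
minikube() { echo "minikube $*"; }

profile=multinode-629100
# Copy the test file to each node, then read it back over ssh to verify.
out=$(for node in "$profile" "$profile-m02" "$profile-m03"; do
  minikube -p "$profile" cp testdata/cp-test.txt "$node:/home/docker/cp-test.txt"
  minikube -p "$profile" ssh -n "$node" "sudo cat /home/docker/cp-test.txt"
done)
printf '%s\n' "$out"
```

The real test additionally copies each node's file to every other node and to a host temp directory, which is why the log repeats this pair so many times.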

                                                
                                    
TestMultiNode/serial/StopNode (67.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 node stop m03: (22.3413438s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-629100 status: exit status 7 (22.5259617s)

                                                
                                                
-- stdout --
	multinode-629100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-629100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-629100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-629100 status --alsologtostderr: exit status 7 (22.6235526s)

                                                
                                                
-- stdout --
	multinode-629100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-629100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-629100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 19:50:04.926818   10164 out.go:345] Setting OutFile to fd 1664 ...
	I0910 19:50:04.978351   10164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:50:04.978351   10164 out.go:358] Setting ErrFile to fd 1668...
	I0910 19:50:04.978351   10164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:50:04.990326   10164 out.go:352] Setting JSON to false
	I0910 19:50:04.990326   10164 mustload.go:65] Loading cluster: multinode-629100
	I0910 19:50:04.990860   10164 notify.go:220] Checking for updates...
	I0910 19:50:04.992085   10164 config.go:182] Loaded profile config "multinode-629100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 19:50:04.992085   10164 status.go:255] checking status of multinode-629100 ...
	I0910 19:50:04.993648   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:50:06.908098   10164 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:50:06.908098   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:06.908098   10164 status.go:330] multinode-629100 host status = "Running" (err=<nil>)
	I0910 19:50:06.908098   10164 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:50:06.908697   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:50:08.788952   10164 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:50:08.788952   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:08.789699   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:50:11.003555   10164 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:50:11.003903   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:11.003903   10164 host.go:66] Checking if "multinode-629100" exists ...
	I0910 19:50:11.019761   10164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 19:50:11.019761   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100 ).state
	I0910 19:50:12.844748   10164 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:50:12.844748   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:12.844748   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100 ).networkadapters[0]).ipaddresses[0]
	I0910 19:50:15.105961   10164 main.go:141] libmachine: [stdout =====>] : 172.31.210.71
	
	I0910 19:50:15.105961   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:15.106320   10164 sshutil.go:53] new ssh client: &{IP:172.31.210.71 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100\id_rsa Username:docker}
	I0910 19:50:15.205601   10164 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.185558s)
	I0910 19:50:15.214381   10164 ssh_runner.go:195] Run: systemctl --version
	I0910 19:50:15.232282   10164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:50:15.257253   10164 kubeconfig.go:125] found "multinode-629100" server: "https://172.31.210.71:8443"
	I0910 19:50:15.257334   10164 api_server.go:166] Checking apiserver status ...
	I0910 19:50:15.267041   10164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:50:15.298627   10164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2145/cgroup
	W0910 19:50:15.315802   10164 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:50:15.325329   10164 ssh_runner.go:195] Run: ls
	I0910 19:50:15.331360   10164 api_server.go:253] Checking apiserver healthz at https://172.31.210.71:8443/healthz ...
	I0910 19:50:15.338162   10164 api_server.go:279] https://172.31.210.71:8443/healthz returned 200:
	ok
	I0910 19:50:15.338195   10164 status.go:422] multinode-629100 apiserver status = Running (err=<nil>)
	I0910 19:50:15.338195   10164 status.go:257] multinode-629100 status: &{Name:multinode-629100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 19:50:15.338248   10164 status.go:255] checking status of multinode-629100-m02 ...
	I0910 19:50:15.338363   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:50:17.259360   10164 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:50:17.259360   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:17.259360   10164 status.go:330] multinode-629100-m02 host status = "Running" (err=<nil>)
	I0910 19:50:17.259360   10164 host.go:66] Checking if "multinode-629100-m02" exists ...
	I0910 19:50:17.260035   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:50:19.135513   10164 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:50:19.135655   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:19.135718   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:50:21.329607   10164 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:50:21.329607   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:21.329607   10164 host.go:66] Checking if "multinode-629100-m02" exists ...
	I0910 19:50:21.338725   10164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 19:50:21.338725   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m02 ).state
	I0910 19:50:23.210890   10164 main.go:141] libmachine: [stdout =====>] : Running
	
	I0910 19:50:23.210945   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:23.210945   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-629100-m02 ).networkadapters[0]).ipaddresses[0]
	I0910 19:50:25.420920   10164 main.go:141] libmachine: [stdout =====>] : 172.31.209.0
	
	I0910 19:50:25.420920   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:25.421604   10164 sshutil.go:53] new ssh client: &{IP:172.31.209.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-629100-m02\id_rsa Username:docker}
	I0910 19:50:25.516676   10164 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.1776699s)
	I0910 19:50:25.524884   10164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:50:25.546669   10164 status.go:257] multinode-629100-m02 status: &{Name:multinode-629100-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0910 19:50:25.547547   10164 status.go:255] checking status of multinode-629100-m03 ...
	I0910 19:50:25.548328   10164 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-629100-m03 ).state
	I0910 19:50:27.425529   10164 main.go:141] libmachine: [stdout =====>] : Off
	
	I0910 19:50:27.425529   10164 main.go:141] libmachine: [stderr =====>] : 
	I0910 19:50:27.426662   10164 status.go:330] multinode-629100-m03 host status = "Stopped" (err=<nil>)
	I0910 19:50:27.426662   10164 status.go:343] host is not running, skipping remaining checks
	I0910 19:50:27.426662   10164 status.go:257] multinode-629100-m03 status: &{Name:multinode-629100-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (67.49s)
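The status checks in the stderr above read disk usage on each running node with `df -h /var | awk 'NR==2{print $5}'`. A runnable sketch feeding awk a hypothetical `df -h /var` sample, showing that `NR==2` selects the data row and `$5` is the `Use%` column:

```shell
# Hypothetical df -h /var output (header row plus one data row).
sample='Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        17G  2.1G   14G  14% /var'
# NR==2 skips the header; $5 is the Use% field.
usage=$(printf '%s\n' "$sample" | awk 'NR==2{print $5}')
echo "$usage"
```

On the stopped m03 node this never runs: the status code sees the Hyper-V VM state `Off` and skips the remaining checks, as the log records.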

                                                
                                    
TestMultiNode/serial/StartAfterStop (170.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 node start m03 -v=7 --alsologtostderr
E0910 19:50:34.428672    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 node start m03 -v=7 --alsologtostderr: (2m19.823765s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-629100 status -v=7 --alsologtostderr
E0910 19:52:59.949029    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-629100 status -v=7 --alsologtostderr: (30.7107505s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (170.68s)

                                                
                                    
TestPreload (482.43s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-177400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0910 20:07:14.523918    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 20:08:00.007943    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0910 20:09:10.922342    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-177400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m6.0869464s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-177400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-177400 image pull gcr.io/k8s-minikube/busybox: (7.8288094s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-177400
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-177400: (37.3205709s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-177400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0910 20:12:43.582384    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-177400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m25.7199689s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-177400 image list
E0910 20:13:00.015837    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-177400 image list: (6.3613947s)
helpers_test.go:175: Cleaning up "test-preload-177400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-177400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-177400: (39.1111697s)
--- PASS: TestPreload (482.43s)

TestScheduledStopWindows (302.2s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-149000 --memory=2048 --driver=hyperv
E0910 20:14:10.934221    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-149000 --memory=2048 --driver=hyperv: (2m56.5253198s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-149000 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-149000 --schedule 5m: (9.2171076s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-149000 -n scheduled-stop-149000
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-149000 -n scheduled-stop-149000: exit status 1 (10.0119953s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-149000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-149000 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.2586151s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-149000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-149000 --schedule 5s: (9.2940864s)
E0910 20:18:00.041898    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-149000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-149000: exit status 7 (2.0611149s)

-- stdout --
	scheduled-stop-149000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-149000 -n scheduled-stop-149000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-149000 -n scheduled-stop-149000: exit status 7 (2.0993037s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-149000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-149000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-149000: (24.7173333s)
--- PASS: TestScheduledStopWindows (302.20s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-447800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-447800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (317.2311ms)

-- stdout --
	* [NoKubernetes-447800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

TestPause/serial/Start (182.06s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-045200 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-045200 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (3m2.0629948s)
--- PASS: TestPause/serial/Start (182.06s)

TestPause/serial/SecondStartNoReconfiguration (351.95s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-045200 --alsologtostderr -v=1 --driver=hyperv
E0910 20:23:00.059605    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-045200 --alsologtostderr -v=1 --driver=hyperv: (5m51.913924s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (351.95s)

TestPause/serial/Pause (7.26s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-045200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-045200 --alsologtostderr -v=5: (7.2570696s)
--- PASS: TestPause/serial/Pause (7.26s)

TestPause/serial/VerifyStatus (10.84s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-045200 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-045200 --output=json --layout=cluster: exit status 2 (10.8436645s)

-- stdout --
	{"Name":"pause-045200","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-045200","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (10.84s)
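As a side note (not part of the test suite): the `--layout=cluster` payload above is plain JSON, so it can be consumed from a script. A minimal sketch, using the literal copied verbatim from the stdout block above; the HTTP-style status codes (418 = Paused, 405 = Stopped, 200 = OK) are paired with their `StatusName` fields in the same output:

```python
import json

# Cluster status JSON copied verbatim from the
# `minikube status --output=json --layout=cluster` stdout above.
raw = '{"Name":"pause-045200","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-045200","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'

status = json.loads(raw)

# Top-level state of the paused cluster.
assert status["StatusCode"] == 418 and status["StatusName"] == "Paused"

# Per-component state on the (single) node: apiserver paused, kubelet stopped.
node = status["Nodes"][0]
components = {name: c["StatusName"] for name, c in node["Components"].items()}
print(components)  # -> {'apiserver': 'Paused', 'kubelet': 'Stopped'}
```

This is also what `TestPause/serial/VerifyStatus` relies on: exit status 2 with a well-formed JSON body is the expected shape for a paused cluster.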

TestPause/serial/Unpause (7.06s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-045200 --alsologtostderr -v=5
E0910 20:28:00.090065    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-879800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-045200 --alsologtostderr -v=5: (7.0610375s)
--- PASS: TestPause/serial/Unpause (7.06s)

TestPause/serial/PauseAgain (7.07s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-045200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-045200 --alsologtostderr -v=5: (7.0734721s)
--- PASS: TestPause/serial/PauseAgain (7.07s)

TestPause/serial/DeletePaused (43.66s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-045200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-045200 --alsologtostderr -v=5: (43.6551685s)
--- PASS: TestPause/serial/DeletePaused (43.66s)

TestPause/serial/VerifyDeletedResources (21.86s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0910 20:29:11.004669    4724 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-218100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (21.8604555s)
--- PASS: TestPause/serial/VerifyDeletedResources (21.86s)

Test skip (29/201)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (7.78s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-879800 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-879800 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 4272: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (7.78s)

TestFunctional/parallel/DryRun (5.04s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-879800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-879800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0347258s)

-- stdout --
	* [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	I0910 18:17:46.853450    8216 out.go:345] Setting OutFile to fd 1176 ...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.915440    8216 out.go:358] Setting ErrFile to fd 1180...
	I0910 18:17:46.915440    8216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:46.934444    8216 out.go:352] Setting JSON to false
	I0910 18:17:46.938978    8216 start.go:129] hostinfo: {"hostname":"minikube5","uptime":103530,"bootTime":1725888736,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:17:46.938978    8216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:17:46.945886    8216 out.go:177] * [functional-879800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:17:46.948694    8216 notify.go:220] Checking for updates...
	I0910 18:17:46.950952    8216 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:17:46.953367    8216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:17:46.955919    8216 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:17:46.958496    8216 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:17:46.961649    8216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:17:46.964672    8216 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:17:46.965635    8216 driver.go:394] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:980: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

TestFunctional/parallel/InternationalLanguage (5.04s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-879800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-879800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0356869s)

-- stdout --
	* [functional-879800] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	I0910 18:17:41.807619    7688 out.go:345] Setting OutFile to fd 1176 ...
	I0910 18:17:41.888637    7688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:41.888637    7688 out.go:358] Setting ErrFile to fd 1180...
	I0910 18:17:41.888637    7688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:17:41.908640    7688 out.go:352] Setting JSON to false
	I0910 18:17:41.912641    7688 start.go:129] hostinfo: {"hostname":"minikube5","uptime":103525,"bootTime":1725888736,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4842 Build 19045.4842","kernelVersion":"10.0.19045.4842 Build 19045.4842","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0910 18:17:41.912641    7688 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 18:17:41.916639    7688 out.go:177] * [functional-879800] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4842 Build 19045.4842
	I0910 18:17:41.920642    7688 notify.go:220] Checking for updates...
	I0910 18:17:41.923639    7688 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0910 18:17:41.926640    7688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:17:41.928648    7688 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0910 18:17:41.931651    7688 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:17:41.934649    7688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:17:41.936651    7688 config.go:182] Loaded profile config "functional-879800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:17:41.937652    7688 driver.go:394] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1025: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
